Stephen Hawking: AI could 'develop a will of its own' in conflict with ours that 'could destroy us'
Hawking has long warned about the risks and dangers that come with the development of AI.
Renowned physicist Stephen Hawking has warned that the rise of artificial intelligence could become "the worst event in the history of our civilization" unless humanity is prepared for the potential risks that come with it.
During a speech at the opening night of the Web Summit conference in Lisbon, Portugal on Monday (6 November), Hawking said effective AI could bring a host of societal benefits and transformation for mankind, noting that "computers can, in theory, emulate human intelligence and exceed it."
"We cannot predict what we might achieve when our own minds are amplified by AI," Hawking said.
"Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one - industrialization. We will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed.
"In short, success in creating effective AI will be the biggest event in the history of our civilization or the worst. We just don't know. So we cannot know whether if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it."
He warned that humanity must prepare for and avoid the significant risks that will come with the continued growth of AI.
"Unless we learn how to prepare for, and avoid the potential risks, AI could be the worst event in the history of our civilization," Hawking said. "It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."
Hawking noted that concerns have already been raised over the possibility of "clever" and efficient AI increasingly taking over work currently done by humans.
"AI could develop a will of its own, a will that is in conflict with ours and which could destroy us," he continued. "In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity."
He called for more research, particularly "disciplinary research", into AI and how it can be used "for good" in the future. He also urged the European Union, governments and lawmakers around the world to work on legislation to regulate the rapidly growing field of AI and robotics research and development.
"Perhaps we should all stop for a moment and focus our thinking on not only making AI more capable and successful but maximizing its societal benefit," Hawking continued.
"I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance."
This isn't the first time Hawking has spoken out about the potential dangers that come with AI.
In an interview with Wired last week, Hawking said a superior and powerful AI may eventually "replace humans altogether" and lead to a "new form of life that outperforms humans."
Other high-profile tech executives have also discussed the risks of highly capable artificial intelligence and the need to prepare for the Singularity.
SpaceX and Tesla CEO Elon Musk, who once compared AI development to "summoning the demon", has long urged global leaders and lawmakers to pay close attention to AI research and development, saying reckless advancement without proper oversight could be dangerous.
In December 2015, he launched the $1bn non-profit OpenAI - backed by several Silicon Valley notables - to help build "safe AI" and ensure that its benefits are distributed "as widely and evenly as possible."