Nick Bostrom: It would be a great tragedy if artificial superintelligence is never developed
Renowned philosopher Nick Bostrom believes it would be a "great tragedy" if human-level artificial intelligence (AI) is never developed, despite his previous warnings about the existential risk that such technology poses to humanity.
Bostrom, who heads the Future of Humanity Institute at the University of Oxford, gained worldwide attention last year with the release of his seminal work Superintelligence: Paths, Dangers, Strategies.
Following its publication, Stephen Hawking, Bill Gates and Elon Musk were among those who raised their concerns about the implications of Bostrom's book.
According to Musk, advanced AI could be "more dangerous than nukes", while Hawking suggested that it could lead to the end of humanity.
Speaking to IBTimes UK during the recent Silicon Valley Comes To Oxford conference, Bostrom said that while the dangers are real, he welcomes advancements in AI and hopes that artificial superintelligence is realised.
"I think that the path to the best possible future goes through the creation of machine intelligence at some point," Bostrom said. "I think it would be a great tragedy if it were never developed.
"I think though it would be greatly desirable to put in some effort to solve the control problem, to figure out how to set up the initial conditions for this intelligence explosion in the best possible way to increase the odds that this big future will be saved according to human values."
In order to set up the initial conditions, Bostrom suggests figuring out a desirable sequence of technological development for emerging fields, such as nanotechnology and advanced synthetic biology.
"We would want the solution to the safety problem before somebody figures out the solution to the AI problem." - Nick Bostrom
The sequence in which such technologies are developed is likely to affect the overall risk of the development trajectory for artificial intelligence. As such, Bostrom believes that more progress needs to be made on the "control problem" through increased investment in talent and research.
"This is an example of the principle of differential technological development," Bostrom said. "We would want the solution to the safety problem before somebody figures out the solution to the AI problem."
Exactly when a human-level AI will be created is uncertain, with AI theorists' estimates ranging from 30 to 100 years from now. Also uncertain is whether the development of this form of advanced AI will benefit society as a whole, or only the individuals and corporations that own or control the technology.
In the short term, Bostrom claims that AI technologies like self-driving cars will help society by increasing efficiency and safety. His concerns centre on the more distant point in the future when machines cease to be merely tools and become general substitutes for humans across the board, able to plan, strategise and learn as well as humans.
However, despite the dire possibilities that Bostrom and his book raise, the philosopher does not describe himself as a doomsayer.
"I try to learn more about what the landscape of the future prospects for humanity looks like, to learn where the treacherous waters are and how we can best circumnavigate them," Bostrom said.
"So I'm more interested in figuring out what we can do now that would have the greatest beneficial impact on our expected longterm future – I have more interest in that than determining the absolute level of risk or hope that we should have in the future.
"I'm not optimistic or pessimistic – I've calibrated myself as best I can to the available evidence."