OpenAI Fired Sam Altman To Stop Him From Releasing Human-Threatening AI Model, Report Claims
OpenAI had reportedly made a breakthrough towards an AGI that could think like humans, but some staff believed such a model could destroy humanity.
A new report has shed some light on the reason for OpenAI CEO Sam Altman's sacking last week.
Altman joined Microsoft after being ousted from OpenAI, but the board has since reversed its decision and reinstated him as CEO. However, it is still unclear why he was removed in the first place.
In its official statement, OpenAI attributed Altman's dismissal to a lack of consistently candid communication with the board.
The real reason for Sam Altman's firing
However, a report citing an anonymous OpenAI employee suggests the company was on the verge of a breakthrough towards artificial general intelligence (AGI) with a model codenamed Q*, which some feared could threaten humanity.
Given Altman's aggressive approach to shipping new features, OpenAI could have decided to deploy Q* quickly. However, some employees believed the model could pose a serious danger to humans, and this could be the real reason for Altman's dismissal.
What is Q*?
An anonymous OpenAI employee got in touch with Reuters to explain the situation inside the company. According to the source, Q* (pronounced Q-Star) is a major breakthrough in OpenAI's research towards AGI.
It is worth noting that AGI is more powerful than conventional AI, which can only learn from the data it is trained on. Besides solving complex problems, an AGI can generalise information and has a human-like ability to learn from its mistakes.
The source claims OpenAI was close to completing Q*, an AGI that appeared capable of surpassing humans in economically valuable tasks. Understandably, Q* was considered a significant breakthrough for the company.
During testing, Q* managed to solve mathematical problems it had not been trained on. Although it could only solve maths questions at a high-school level, the researchers were impressed. Meanwhile, others raised concerns over the model's potential applications.
Some OpenAI staff went on to notify the board about the discovery of Q*. Despite its impressive performance in testing, these staff feared the new AGI could pose a huge threat to humanity.
Altman has been shipping a slew of features without worrying much about the consequences. He has also admitted to pushing back the veil of ignorance several times during his tenure at OpenAI.
The OpenAI board therefore feared that Altman would release Q* without assessing and mitigating its negative outcomes. As per the source, the researchers warned the board that such an AGI could one day decide to destroy humanity.
To recap, concerns about ChatGPT's misuse emerged within a year of its launch: a new study suggests ChatGPT-like tools can help hackers launch cyber attacks.
OpenAI's Q* could make the situation worse if it were released without all possible outcomes being considered. This probably led the OpenAI board to question Altman's leadership and fire him.
However, OpenAI has neither confirmed nor denied these claims. For now, Altman has returned to OpenAI as CEO, and the company might officially confirm the existence of the AGI model in the coming days.
© Copyright IBTimes 2024. All rights reserved.