Microsoft Research Shows Chat GPT-4 Is Easier To Manipulate Than GPT-3.5
According to the research, GPT-4 is likely to follow misleading information more precisely than earlier versions of the AI bot.
In a study titled "DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models", researchers from various universities across America and Microsoft's own Research division found that Chat GPT-4 is more susceptible to manipulation as compared to earlier versions.
The new research paper's topic centres on the reliability of the AI tool developed by the American AI company, OpenAI. The researchers used both GPT-4 and GPT-3.5 models to conduct their research.
The paper suggests that GPT-4 "follows misleading information more precisely". As a result, a user is more likely to end up providing personal information. However, Microsoft's involvement in the research is a major sign it had an additional purpose.
Microsoft has been sparing no effort in a bid to be at the forefront of the AI space. In line with this, an earlier report claims the Redmond-based tech giant is on the verge of unveiling its maiden AI chip.
Aside from this, the company recently integrated GPT-4 into a slew of its popular software, including Windows 11. In the paper, researchers pointed out that the issues found in the AI do not appear in Microsoft's "consumer" facing output.
It is worth noting that Microsoft is one of the key investors in OpenAI. The tech behemoth provides the AI company with billions of dollars along with access to its cloud infrastructure.
Is GPT-4 worse than its predecessors?
The research has been split into various testing categories, including things like fairness, privacy, stereotypes and toxicity. Much to the delight of those who want to conduct their own tests, researchers have published the "DecodingTrust benchmark" on a GitHub site.
Although GPT-4 can be tricked easily, the researchers gave it a much higher score than the GPT-3.5 in the trustworthiness department.
"We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts," the paper's abstract states.
Jailbreaking AI isn't similar to jailbreaking an iPhone to get access to more apps. It takes some effort to get AI tools like Google Bard or ChatGPT to bypass its restrictions and provide answers they are not supposed to.
Still, one user managed to exploit ChatGPT earlier this year to tell them how to make napalm by pretending to be their deceased Grandma.
While it looks like ChatGPT is losing its lustre, AI development and extensive research appear to be continuing. Meanwhile, Baidu has announced its new Ernie Bot 4.0 AI, which the Chinese tech giant claims is as good as Chat GPT-4.
© Copyright IBTimes 2024. All rights reserved.