Google Bard, Bing Chat Provide Inaccurate Reports On Israel-Hamas Conflict
Google Bard and Microsoft Bing Chat are inaccurately reporting a ceasefire in the ongoing Israel-Hamas conflict.
Google's widely popular AI bot Bard and Microsoft's Bing Chat have lately been inaccurately reporting a ceasefire in the Israel-Hamas conflict.
OpenAI's ChatGPT chatbot took the world by storm when it arrived in November 2022. Notably, the AI-backed bot can now access real-time information from the web, allowing it to answer questions about current events in a flash. On the downside, the content that AI chatbots present can sometimes be inaccurate.
AI chatbots report inaccurate information
Lately, Google Bard and Microsoft Bing Chat have been catching flak for providing erroneous reports on the Israel-Hamas conflict. When asked basic questions about the conflict, both chatbots inaccurately claimed that a ceasefire was in place.
Reportedly, Google Bard told Bloomberg's Shirin Ghaffary that both sides were committed to keeping the peace. Likewise, Microsoft's Bing Chat wrote: "the ceasefire signals an end to the immediate bloodshed."
As if that weren't enough, Google Bard also reported an inaccurate death toll. Asked about the ongoing conflict on October 9, Google's AI bot claimed that the death toll had passed 1,300 on October 11, a date that had not yet arrived.
Why are Google Bard and Bing Chat offering inaccurate info?
Regrettably, it is still unclear why Google Bard and Bing Chat are providing inaccurate reports. However, AI chatbots have a reputation for twisting facts from time to time. This issue is known as AI hallucination.
According to the folks at IBM, AI hallucination is a phenomenon wherein an LLM (large language model) "perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate".
Such models typically power generative AI chatbots or computer vision tools. In other words, AI hallucination occurs when an AI model makes up facts and presents them as accurate information. This is not the first time that AI chatbots have made things up.
Back in June, ChatGPT falsely accused a man of a crime, leaving OpenAI on the verge of getting sued for libel. Meanwhile, new research claims ChatGPT and Google Bard could trigger deadly mental illnesses.
The AI companies behind these popular chatbots are aware of the problem, which has persisted for a while now. During an event at IIT Delhi, India, in June, OpenAI co-founder and CEO Sam Altman acknowledged that it would take about a year to perfect the model.
Furthermore, the top executive said OpenAI is sparing no effort to minimise the problem. "I trust the answers that come out of ChatGPT the least out of anyone else on this Earth," Altman said. The latest inaccurate reporting of the Israel-Hamas conflict raises some serious questions about the reliability of AI chatbots.