'ChatGPT Falsely Accused Me Of Sexually Harassing A Student': Law Professor Says Accusations Were 'Chilling'
ChatGPT allegedly cited a non-existent news article about a 'sexual harassment' case against Turley that never occurred.
Jonathan Turley, a law professor at George Washington University, warned about the dangers of AI after he was falsely accused of sexual harassment by ChatGPT, which cited a fabricated article on a supposed 2018 case.
Turley, a Fox News contributor, hasn't shied away from highlighting the risks of artificial intelligence (AI). He has long expressed concerns about disinformation, particularly in connection with OpenAI's widely popular chatbot ChatGPT.
Last year, a UCLA professor and friend of Turley's who was researching ChatGPT informed him that his name had surfaced in a search. The prompt asked ChatGPT for five examples of sexual harassment by US law professors, along with quotes from relevant newspaper articles to support the claims.
Law Professor Falsely Accused by Chatbot
"Five professors came up, three of those stories were clearly false, including my own," Turley told "The Story" on Fox News Monday. The most disturbing aspect of the incident was the AI's fabrication: ChatGPT invented a Washington Post story, complete with a fabricated quote, alleging harassment on a student trip to Alaska.
"That trip never occurred. I've never gone on any trip with law students of any kind. It had me teaching at the wrong school, and I've never been accused of sexual harassment," Turley clarified.
On April 6, 2023, the 61-year-old legal scholar took to X (formerly Twitter) to expose ChatGPT's defamation. The AI had fabricated a 2018 sexual harassment allegation, claiming a female student accused him on an Alaska trip that never took place.
The chatbot even fabricated quotes from a supposed Washington Post article, alleging he made "sexually suggestive comments" and "attempted to touch her in a sexual manner," Turley said. On "America Reports," Turley emphasised the gravity of the situation.
"You had an AI system that made up entirely the story but actually made up the cited article and the quote," he said. Upon investigation, The Washington Post itself could find no trace of the story, a clear sign that the chatbot can fabricate entire narratives.
ChatGPT, an AI chatbot known for its human-like conversation skills, is used by a global audience for tasks like email writing, code debugging, research, creative writing, and more. Citing his experience with ChatGPT, Turley called for responsible AI development and urged news outlets to implement stricter verification processes before using such software.
When Algorithms Inherit Bias
Echoing Turley's warnings, a recent study found that large language models (LLMs) such as ChatGPT can be surprisingly easy to manipulate into malicious behaviour. Researchers were further alarmed to find that applying safety training techniques failed to correct the AI's deceptive tendencies.
"I was fortunate to learn early on, in most cases this will be replicated a million times over on the internet and the trail will go cold. You won't be able to figure out that this originated with an AI system," Turley said.
"And for an academic, there could be nothing as harmful to your career as people associating this type of allegation with you and your position. So I think this is a cautionary tale that AI often brings this patina of accuracy and neutrality."
Turley argued that AI, like humans, can inherit biases and ideological slants from the data it's trained on. This vulnerability was highlighted last year when a report revealed that Microsoft's AI assistant, Copilot, offered inaccurate information on US election-related queries.
Turley pointed out that an AI is only as good as its programmers. He also noted that OpenAI has neither apologised for nor addressed the fabricated story that tarnished his reputation.
"I haven't even heard from that company," Turley continued. "That story, various news organisations reached out to them. They haven't said a thing. And that's also dangerous. Because when you're defamed like this in an article by a reporter, you know how to reach out. You know who to contact. With AI, there's no one there. And ChatGPT looks like they just shrugged and left it at that."
© Copyright IBTimes 2024. All rights reserved.