Microsoft CEO Claims 'Enough Tech' Exists To Stop AI Deepfakes In US Elections
The company has put elaborate measures like watermarking in place to shield the 2024 US election from AI-fueled misinformation.
Microsoft CEO Satya Nadella believes existing technology can effectively safeguard US elections from AI-generated deepfakes and misinformation.
Generative AI, which refers to a type of artificial intelligence (AI) that can create new content such as text, images, music and videos, has taken the technology industry by storm.
A recent UK-wide study exploring the use of generative AI among students shows that over 50 per cent of undergraduates rely on AI tools to help with their essays. Likewise, a considerable number of users leverage generative AI across fields such as education and medicine, and as a productivity tool at work.
In contrast, some users are reluctant to use the technology due to concerns that centre primarily on its safety. Those concerns seem well-founded, as a separate study showed how ChatGPT-like large language models (LLMs) can be trained to "go rogue" and behave maliciously.
Still, the technology has a major impact on the information circulating on the internet. Understandably, some users have expressed strong reservations about it, citing reports of AI-generated deepfakes, which AI expert Oren Etzioni says could trigger a "tsunami of misinformation" and influence the upcoming US elections.
Last year, US President Joe Biden issued an executive order on AI that requires new safety assessments as well as equity and civil rights guidance. However, some issues persist.
Insights from Satya Nadella on stopping AI from going rogue
Nadella joined Lester Holt on NBC Nightly News to discuss the measures in place to prevent AI from being used to spread misinformation about the 2024 presidential election.
Holt kicked off the interview by asking Nadella what measures are in place to protect the upcoming elections from deepfakes and misinformation.
This is not the first time elections have faced misinformation. However, the widespread use of AI presents unique challenges, as Nadella acknowledged during the interview. He went on to highlight the tech giant's experience in countering such threats.
Referring to the unprecedented challenges posed by AI, Nadella detailed measures such as watermarking, content IDs and deepfake detection to safeguard the upcoming election. He expressed confidence in existing technology, claiming it is sufficient to combat AI-generated disinformation.
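To make the idea behind content IDs and watermarking more concrete, here is a minimal, purely illustrative Python sketch of how a publisher might bind an image's hash to a signed provenance record so that later tampering can be detected. It is not Microsoft's implementation: real provenance systems, such as the C2PA Content Credentials standard, use certificate-based signatures embedded in the media file itself, and the key, field names and functions below are hypothetical.

```python
# Toy "content credential": sign an image's bytes with an HMAC so later edits
# can be detected. Illustrative only; real systems use certificate-based
# signatures, and the key and record fields here are hypothetical.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the publisher


def issue_credential(image_bytes: bytes, source: str) -> dict:
    """Create a provenance record binding the image's hash to its source."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "source": source}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Check that the record is authentic and still matches the image bytes."""
    expected_sig = hmac.new(
        SIGNING_KEY, credential["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected_sig, credential["signature"]):
        return False  # record was forged or altered
    recorded = json.loads(credential["payload"])
    return recorded["sha256"] == hashlib.sha256(image_bytes).hexdigest()


original = b"...image bytes..."
cred = issue_credential(original, source="example-newsroom")
print(verify_credential(original, cred))                # True: untouched image
print(verify_credential(original + b"edit", cred))      # False: image was modified
```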
What is Microsoft doing to ensure a seamless electoral process?
Last year, Microsoft Copilot (formerly Bing Chat) was accused of misleading voters by generating false information about the upcoming elections. According to researchers, the issue was systemic, as Copilot also provided inaccurate information about election processes in Germany and Switzerland.
This doesn't come as a surprise, given that some reports claim AI-powered chatbots are becoming less accurate. Google recently confirmed it will restrict the types of election-related queries for which Bard and Search Generative Experience (SGE) will return responses.
Similarly, Microsoft has outlined its plan to protect the integrity of the upcoming election from AI-generated deepfakes. The Redmond-based tech giant plans to empower voters by providing factual election news on Bing ahead of the polls.
Bing's market share has remained stagnant despite Microsoft's heavy spending on AI, and the company has struggled to crack Google's dominance in search.
Google and Microsoft's Bing have previously faced severe criticism for featuring deepfake pornography among their top results. The issue recently worsened when fake images of pop star Taylor Swift surfaced across social media. Nadella has described AI-generated explicit content as alarming and terrible.
According to a report by Windows Central, there is a possibility that the Swift deepfakes were generated using Microsoft Designer. Microsoft has since rolled out an update that restricts how users can interact with the tool.
The Designer tool now blocks prompts that request nude imagery. Separately, the newly introduced Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act aims to regulate and prevent such abuse.
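For illustration only, the sketch below shows the simplest possible form of prompt-level blocking: a keyword blocklist. This is not how Microsoft Designer's safeguard actually works (production systems typically rely on trained content classifiers), and the blocked terms and function name here are hypothetical.

```python
# Toy prompt filter: reject generation requests containing blocked terms.
# Purely illustrative; the blocklist below is a hypothetical placeholder.
BLOCKED_TERMS = {"nude", "nudity", "undress"}  # hypothetical examples


def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing any blocked term; allow the rest."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)


print(is_prompt_allowed("a watercolor painting of a lighthouse"))  # True
print(is_prompt_allowed("generate a nude photo of a celebrity"))   # False
```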
© Copyright IBTimes 2024. All rights reserved.