A manipulated video mimicking Kamala Harris, shared by Elon Musk, highlights concerns about AI’s role in spreading misinformation and the need for better regulation.

With the US presidential election just around the corner, concerns about the misuse of artificial intelligence have intensified due to a video that falsely mimics Vice President Kamala Harris's voice.

The video falsely depicts Harris saying things she never actually said, and it has ignited a debate about the role of AI in spreading political misinformation and its effect on election integrity.

The video's rapid release and spread highlight the urgent need for more precise guidelines on AI-generated content. However, this doesn't come as a surprise. Last year, a top AI expert predicted misinformation would be a major problem in the upcoming US presidential elections.

The Deepfake Controversy: Consequences And Reactions

The video first attracted significant attention when tech billionaire Elon Musk shared it on Friday night on X, his social media platform, and it quickly became controversial. Musk's X post, which has amassed over 123 million views, called the video "amazing" alongside a laughing emoji but did not indicate it was a parody.

This has sparked increased concerns about AI's power to mislead people. The manipulated video, which closely mirrors the visuals of Harris's recent campaign ad, swaps the authentic voice-over for an artificial one that convincingly mimics her voice.

The altered audio says, "I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility." The voice, digitally manipulated to sound like Harris, further claims she was chosen "because I am the ultimate diversity hire" as "both a woman and a person of colour."

As The New York Times reported, X user @MrReaganUSA initially uploaded the video, noting in the post that it was a "parody." The altered version parodies an ad by Harris titled "We Choose Freedom."

Dr. Kate Tepper, a spokesperson for the Harris campaign, condemned the video in an email to The Associated Press, stating, "We believe the American people want the real freedom, opportunity and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump."

AI Influence On Election Misinformation

Top tech figures such as Microsoft CEO Satya Nadella have expressed confidence that current technology can defend US elections against AI-generated deepfakes and misinformation. Even so, the emergence of the manipulated Harris video illustrates mounting concern about AI-generated media and its capacity to spread falsehoods.

As AI technology advances and becomes more accessible, creating convincing deepfakes and manipulated media has become markedly easier. The video's original creator, YouTuber Mr Reagan, has confirmed that it is meant as a parody.

However, when a comment argued that manipulating a voice in an "ad" like this might be illegal, Musk responded, "I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America."

Despite calls from some X users to label the post for context, no label was added at the time of writing. X's guidelines prohibit sharing synthetic or manipulated media that might mislead or confuse people but allow for exceptions for memes and satire as long as they do not create "significant confusion about the authenticity of the media."

As the debate over AI's role in spreading misinformation continues, the recent incident with the manipulated video underscores the urgent need for clarity and regulation in the digital age. While platforms like X navigate their policies and users raise concerns, the evolving landscape of technology demands vigilant oversight to ensure that media remains trustworthy and transparent.