Google Gemini AI Successfully Defends Against Hackers: They Can Only Use Tool For 'Research, Creating Content'
Gemini's safeguards limited hackers to minor 'productivity gains' rather than major breaches
Google has identified multiple state-sponsored hacking groups attempting to exploit its Gemini AI for malicious purposes, such as developing harmful software. However, none of these attempts have so far led to significant cyberattacks.
'While AI can be a useful tool for threat actors, it is not yet the game-changer it is sometimes portrayed to be,' Google's Threat Intelligence Group (GTIG) stated in a blog post.
State-Sponsored Hackers Target Gemini
Google's investigation found that government-backed hackers from Iran, North Korea, China, and Russia have been using Gemini to translate text, refine phishing campaigns, and generate code.
The Sundar Pichai-led tech giant tracked this activity to over ten Iranian hacking teams, twenty Chinese government-backed groups, and nine North Korean hacking operations.
'Iranian APT (advanced persistent threat) actors were the heaviest users of Gemini, using it for a wide range of purposes, including research on defence organisations, vulnerability research, and creating content for campaigns,' GTIG added.
Malicious Use Of Gemini Proves Unsuccessful
However, GTIG says the hackers have achieved only 'productivity gains' from Gemini and have not used it to directly breach computer systems. 'At present, they primarily use AI for research, troubleshooting code, and creating and localising content,' it explained.
For example, Gemini assisted these government-backed hackers with tasks such as creating content, simplifying complex ideas, and producing simple code. However, the chatbot's built-in safeguards stopped these groups from bypassing its safety features or carrying out more complex operations, such as account takeovers.
In the report, Google said it discovered that some groups also tried, unsuccessfully, to get Gemini to help them misuse Google products: developing sophisticated Gmail phishing attacks, writing code for a Chrome data-stealing program, and finding ways around Google's account verification process.
No Breakthroughs For Hackers
'These attempts were unsuccessful. Gemini did not produce malware or other content that could plausibly be used in a successful malicious campaign,' the report adds. Even so, Google recognised that Gemini could enable 'threat actors to move faster and at higher volume.'
For instance, an Iranian propaganda effort used Gemini to improve the translation of its content for local audiences. Meanwhile, hackers connected to North Korea used the chatbot to create cover letters and inquire about LinkedIn job openings—potentially to secure remote IT positions with US companies, an issue federal investigators are working to address.
'The [North Korean] group also used Gemini for information about overseas employee exchanges. Many of the topics would be common for anyone researching and applying for jobs,' Google said.
Hackers Targeted OpenAI's ChatGPT Too
The company's report echoes findings from competitor OpenAI. A year ago, Microsoft-backed OpenAI also detected numerous government-backed hackers attempting to misuse ChatGPT.
However, OpenAI's investigation concluded that these groups were simply using the chatbot as an efficiency tool, providing only 'limited, incremental capabilities for malicious cybersecurity tasks' rather than anything groundbreaking.
Google says it's building its AI systems with strong security measures, which it regularly tests, to prevent this misuse. 'We investigate abuse of our products, services, users and platforms, including malicious cyber activities by government-backed threat actors, and work with law enforcement when appropriate,' the company said.
These measures include disrupting suspected harmful activity, a sign that Google is working actively to keep hackers from abusing its services.