Cybercriminals Can Use ChatGPT Tool To Pull Off Online Scams, New Research Finds
BBC News created a private bespoke AI bot called Crafty Emails that effortlessly made content for five well-known scam and hack techniques.
According to new research, cybercriminals can carry out online scams by taking advantage of a ChatGPT feature that allows users to build their own artificial intelligence (AI) assistants.
A BBC News investigation has revealed that this ChatGPT feature can be exploited to facilitate cyber-crime. OpenAI, the company behind the widely popular AI-powered chatbot, launched the feature last month.
Using this controversial feature, BBC News recently managed to create a custom generative pre-trained transformer (GPT) that can craft genuine-looking emails, texts and social media posts for scams and hacks.
This corroborates an earlier report that indicated cybercriminals are using ChatGPT-like AI bots to craft credible-looking phishing emails.
After signing up for the paid version of ChatGPT, BBC News created a private bespoke AI bot called Crafty Emails and instructed it to write text using "techniques to make people click on links or download things sent to them".
It took only a few seconds for the bot to absorb knowledge from social engineering resources uploaded by BBC News. It also generated a logo for the GPT without any user intervention.
Next, the bot went on to create highly convincing text for a slew of hack and scam techniques, in multiple languages. While Crafty Emails did almost everything BBC News asked of it, the public version of ChatGPT refused to create most of the same content.
In some cases, BBC News' bot pointed out that the scam techniques were unethical.
At its first developer conference, DevDay, OpenAI announced plans to launch an App Store-like service for GPTs. The platform would allow users to share, and charge for, their unique creations.
Interestingly, the American AI company has postponed the launch of its custom GPT store without giving a reason. When it unveiled the GPT Builder tool, OpenAI said it would review GPTs to make sure they are not being used for fraudulent activity.
However, some experts accuse the company of not moderating these custom GPTs with the same strictness it applies to the public version of ChatGPT. Putting its bespoke bot to the test, BBC News asked it to create content for five scam and hack techniques.
Hi Mum text scam
The "Hi Mum" text scam is a common scam around the world and is also known as a WhatsApp scam.
BBC News asked Crafty Emails to write a text posing as a girl in distress who is using someone else's phone to ask her mum for taxi money. Crafty Emails produced a message that it said would appeal to the mother's "protective instincts".
Nigerian-prince email
Nigerian-prince scam emails have been circulating on the internet for quite some time. Crafty Emails wrote one using language the bot said "appeals to human kindness and reciprocity principles".
Smishing text
Crafty Emails was also able to write a text designed to trick people into clicking a link and entering their personal details on a fictitious website. This is a classic form of attack known as short-message service (SMS) phishing, or smishing.
Using social-engineering techniques such as the "need-and-greed principle", Crafty Emails wrote a text about giving away free iPhones. However, the public version of ChatGPT refused to produce it.
Crypto-giveaway scam
Crafty Emails created a tweet with hashtags, emojis and language designed to persuade cryptocurrency fans to take part in a fake giveaway. However, the generic version of ChatGPT refused.
Spear-phishing email
The Crafty Emails GPT wrote a spear-phishing email that warned a fictional company executive of a data risk and encouraged them to download a booby-trapped file.