Businesses may be at risk from cybercriminals using ChatGPT
ChatGPT's ability to generate code can be exploited for data breaches, leaving businesses vulnerable to cyberattacks.
Since its release in November 2022, ChatGPT has stunned users around the world with its ability to mimic human-like speech. Requiring only a small amount of user input, it can produce essays, speeches and even scripted code.
Reaching over 100 million users globally in the space of a couple of months is a historic achievement. But unfortunately, as a result of this rapid success and its groundbreaking AI features, ChatGPT is starting to be used in cybercrime.
This is not all that surprising: in recent years, whenever there has been an advancement in computer-related technology, opportunistic cybercriminals have sought to exploit it for criminal ends.
However, unlike previous advances in AI technology, ChatGPT's ability to mimic human language makes its output nearly impossible for people to spot, which poses an immediate threat to businesses.
How are cybercriminals using ChatGPT to attack businesses?
One of ChatGPT's lesser-known features is its ability to produce code. This could allow cybercriminals to hack into businesses and deploy malicious code into their systems.
Technically, OpenAI has restricted users from using the software to create malware. However, a recent report shows that hackers have found a workaround to these restrictions via public online forums. Cybercriminals can now use the software to create a "python file stealer that searches for common file types".
Malicious code introduced via ChatGPT could disrupt a business through data breaches and network disturbances, both of which could lead to financial losses as well as damage to the business's reputation.
But who bears the responsibility for this misconduct?
The most obvious answer is OpenAI, the AI company that produced ChatGPT, which should therefore bear some of the responsibility for these hacks.
Even though it is a relatively young company, founded as a non-profit organisation in 2015, it should not be acceptable to release technology with the potential to infiltrate business databases without also equipping people with the knowledge to defend themselves against such infiltrations.
It is even harder to defend OpenAI for not taking appropriate action when you consider its recent partnership with global tech giant Microsoft. Given the resources now at its disposal as a result of this partnership, OpenAI will likely make security measures a top priority moving forward.
What can businesses do to protect themselves now?
Thankfully, despite ChatGPT's potential to be used for cybercrime, there are also people equipped to defend against these attacks, including software developers and other industry bodies. To prevent future attacks, businesses should consider employing developers who already understand AI software and know how to put safety measures in place.
In the future, people working in businesses vulnerable to these attacks should ideally learn more about AI software. In the meantime, hiring third-party developers with AI expertise can serve as an immediate solution.
Another way to protect businesses from breaches is user authentication. This can take the form of two-factor authentication, where the user's mobile device generates a time-limited password required to access an account. Many companies use this model to protect against hackers. Unlike third-party hires, this solution depends on the users themselves to provide security.
It is important to note that measures such as two-factor authentication will not put an end to data breaches via ChatGPT, but they will make life more inconvenient for hackers.
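To illustrate the mechanism, here is a minimal sketch of a time-based one-time password (TOTP) generator, the standard algorithm (RFC 6238) behind most two-factor authentication apps that produce a fresh six-digit code every 30 seconds. The secret shown is an illustrative test value, not a real credential, and a production system would use a vetted library rather than this hand-rolled sketch.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, at=None):
    """Generate a time-based one-time password (RFC 6238).

    secret_b32: shared secret, base32-encoded (as shown in QR-code setup).
    interval:   how long each code is valid, in seconds.
    at:         Unix timestamp to generate the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32)
    # Count how many intervals have elapsed since the Unix epoch.
    counter = int((time.time() if at is None else at) // interval)
    # HMAC the counter (as a big-endian 64-bit integer) with the shared key.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes based on the last nibble of the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example with the RFC 6238 test secret ("12345678901234567890" in base32):
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # prints "287082"
```

Because both the server and the user's phone derive the code from the same shared secret and the current time, a password stolen by a hacker expires within seconds, which is what makes the extra factor inconvenient to attack.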
Requiring more user authentication will not only make it harder for hackers to gain access, but will also make the service less convenient for genuine users. It is therefore important for OpenAI to strike a balance between security and convenience, or it risks losing a portion of its users.
Despite the potential for AI-assisted breaches, businesses should not worry too much. If history has taught us anything, it is that as internet technology advances, so does online protection. Businesses at risk of AI-assisted breaches will, in all likelihood, become equipped to deal with this threat in the near future.
© Copyright IBTimes 2024. All rights reserved.