Concerns surrounding ChatGPT and the dangers it may pose to businesses worsen
Hayes Connor's Richard Forrest has addressed the confidentiality risks ChatGPT poses to businesses, explaining how staff can use the chatbot safely.
Concerns have recently been raised about the use of ChatGPT and the dangers that come with the AI chatbot. Among those at risk are businesses, which can be left vulnerable to cyberattacks because of ChatGPT's ability to produce code, potentially leading to data breaches.
ChatGPT's ability to help organisations grow and work more efficiently has made it a popular tool among companies. The chatbot recently drew high praise from Microsoft co-founder Bill Gates, a firm believer in AI's usefulness going forward.
The concern for businesses, as revealed by an investigation by Cyberhaven, is that 11 per cent of the information staff copy and paste into the chatbot is sensitive data. The investigation uncovered one worrying case in which a medical practitioner entered a patient's private information into the chatbot, with the fallout of the incident still unknown.
Growing privacy concerns over ChatGPT have led well-known organisations such as Amazon, JP Morgan and Accenture to restrict their employees' use of the chatbot.
Richard Forrest, Legal Director of leading UK data breach firm Hayes Connor, has shared his thoughts on the use of ChatGPT in the business landscape, saying: "ChatGPT, and other similar Large Language Models (LLMs), are still very much in their infancy. This means we are in uncharted territory in terms of business compliance, and regulations surrounding their usage."
He went on to add that the nature of ChatGPT "has sparked ongoing discussions about the integration and retrieval of data within these systems. If these services do not have appropriate data protection and security measures in place, then sensitive data could become unintentionally compromised."
Forrest says many people still do not truly understand how these technologies work, which leads to "the inadvertent submission of private information. What's more, the interfaces themselves may not necessarily be GDPR compliant."
He stresses that companies whose staff use ChatGPT without the necessary training could "unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage, and legal action. As such, usage as a workplace tool without proper training and regulatory measures is ill-advised."
Forrest believes the ultimate responsibility lies with companies themselves, which must "take action to ensure regulations are drawn up within their business, and to educate employees on how AI chatbots integrate and retrieve data."
In the UK, human error is one of the biggest causes of data breaches across various sectors. With AI being used more widely in the corporate landscape, greater training is both necessary and a high priority.
Regarding the UK's responsibility, Forrest believes it is vital the nation "engages in discussions for the development of a pro-innovation approach to AI regulation."
To guard against data compromises and GDPR breaches when using ChatGPT, Forrest advises businesses to assume that anything entered into the chatbot could become accessible in the public domain. He also believes employees should not input internal data or software code.
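One way a business might operationalise that advice is to screen outgoing prompts for text that looks sensitive before it ever reaches an external chatbot. The Python sketch below is a minimal illustration of that idea, not a measure Forrest or Hayes Connor prescribes; the pattern list, the check_prompt function and the sample text are assumptions for demonstration only.

```python
import re

# Illustrative patterns only: a real deployment would need far broader
# coverage (names, addresses, medical details, proprietary code, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "API key or secret": re.compile(
        r"\b(?:sk|api|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE
    ),
}

def check_prompt(prompt: str) -> list[str]:
    """Return a warning for each sensitive-looking pattern found in the prompt."""
    return [
        f"Possible {label} detected; do not send this to an external chatbot."
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(prompt)
    ]

if __name__ == "__main__":
    draft = "Summarise: patient jane@example.com, NI number QQ123456C."
    for warning in check_prompt(draft):
        print(warning)
```

A check like this could sit in an internal proxy or browser extension, flagging or blocking prompts before staff submit them.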
Forrest's other tips include revising confidentiality agreements so that they cover the use of AI, and inserting an explicit clause to that effect into employees' contracts.
To further help organisations remain vigilant, Forrest believes they should hold sufficient training sessions on AI usage, and put in place a company policy and an employee user guide.