NCSC warns of chatbot vulnerabilities, urging holistic security approach
The UK's NCSC has warned about chatbot vulnerability to manipulation by hackers through "prompt injection" attacks, highlighting risks to data security and necessitating comprehensive defence strategies.
The National Cyber Security Centre (NCSC) of the United Kingdom has raised an alarm regarding the growing susceptibility of chatbots to manipulation by hackers, posing potentially serious consequences in the real world.
This cautionary announcement comes as concerns mount over the emergence of "prompt injection" attacks, a method employed by individuals to deliberately engineer input or prompts aimed at manipulating the behaviour of the language models that underlie chatbots.
Chatbots have become indispensable tools across a range of applications such as online banking and e-commerce due to their adeptness in handling uncomplicated user requests. These capabilities are primarily driven by large language models (LLMs), including renowned systems like OpenAI's ChatGPT and Google's AI chatbot Bard.
Furthermore, these models have undergone exhaustive training on extensive datasets, enabling them to produce human-like responses when presented with user queries.
The NCSC has now cast a spotlight on the escalating risks linked to malicious prompt injection, primarily because chatbots frequently facilitate the exchange of data with third-party applications and services.
The Centre has advised that organisations incorporating LLMs into their services should exercise caution analogous to the approach taken with beta software or code libraries. Just as they might not entrust such products to undertake transactions on behalf of customers or rely on them entirely, similar prudence should be exercised with LLMs.
"Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta," the NCSC explained.
In scenarios where users input unfamiliar statements or exploit specific word combinations to override a model's original programming, unintended actions can be initiated. This could potentially culminate in the generation of objectionable content, unauthorised access to sensitive data, or even data breaches.
Oseloka Obiora, Chief Technology Officer at RiverSafe, has noted that the eagerness to adopt AI without implementing essential due diligence checks could lead to catastrophic outcomes.
Obiora said: "The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks."
Chatbots have already demonstrated susceptibility to manipulation, paving the way for fraudulent activities, illicit transactions and breaches of confidential data.
A notable incident occurred with the release of Microsoft's updated Bing search engine and conversational bot. A student from Stanford University, Kevin Liu, successfully employed prompt injection to uncover vulnerabilities within Bing Chat's initial prompt.
Similarly, security researcher Johann Rehberger unearthed the possibility of manipulating ChatGPT to respond to prompts from unintended sources, thereby introducing the potential for indirect prompt injection vulnerabilities.
To counteract the challenges of identifying and mitigating prompt injection attacks, the NCSC advocates for a holistic system design that encompasses the potential risks associated with machine learning components.
Introducing a rules-based system alongside the machine learning model is recommended as a means to counteract potentially harmful actions. By bolstering the security architecture of the entire system, it becomes feasible to repel malicious prompt injections.
The NCSC underscores that combatting cyberattacks stemming from machine learning vulnerabilities requires comprehending the tactics employed by attackers and prioritising security during the design phase. Jake Moore, Global Cybersecurity Advisor at ESET, adds that by developing applications with security at the forefront and comprehending the strategies adversaries adopt to exploit weaknesses in machine learning algorithms, the impact of AI-related cyberattacks can be diminished.
Nonetheless, he laments that the urgency to launch quickly or achieve cost savings often overshadows the importance of instituting robust security measures, thereby jeopardising individuals and their data to unforeseen attacks. Moore stresses the need for people to recognise that the input they provide to chatbots isn't always safeguarded.
Given that chatbots continue to assume pivotal roles in various online interactions and transactions, the NCSC's warning stands as a pertinent reminder of the necessity to fend off evolving cybersecurity threats.
© Copyright IBTimes 2024. All rights reserved.