Google warns employees against sharing confidential information with chatbots, including its own Bard
About 43 per cent of professionals were using AI tools like ChatGPT as of January, according to a Fishbowl survey.
Google has reportedly warned its employees against sharing confidential information with chatbots, including its own Bard, even as the search giant continues to promote the AI bot around the world, four people familiar with the matter told Reuters.
The precautionary measure is part of Google's long-standing policy on safeguarding information. Widely popular, human-sounding programs like Bard and ChatGPT use generative artificial intelligence (AI) to hold conversations with users.
These chatbots rely on generative AI to answer users' prompts, and researchers have found that similar AI models can reproduce data they absorbed during training. There are no prizes for guessing that this behaviour could create a leak risk.
Google urges its employees to be cautious while using AI bots
Google parent company Alphabet Inc. also cautioned its engineers against directly using computer code generated by chatbots. Although Bard helps programmers, Google's AI bot can also make unsuitable code suggestions. The American tech giant added that it aims to be transparent about the limitations of Bard.
Last month, a report indicated that Google is preparing to bring its Bard chatbot to Pixel smartphones and tablets, where the AI tool would reportedly appear as a home screen widget. However, it now seems the company also wants to avoid business harm from its own ChatGPT rival.
Billions of dollars of investment, along with still-untold advertising and cloud revenue from new AI programs, are at stake in the company's race against ChatGPT's backers, OpenAI and Microsoft. Google's warning to its employees also reflects what is becoming a security standard for corporations: cautioning personnel against using publicly available chat programs.
A considerable number of businesses, including Deutsche Bank and Amazon, are restricting the use of AI chatbots. South Korean smartphone giant Samsung banned the use of AI tools like Bard and ChatGPT last month, citing security concerns. The list of companies that have set up guardrails on AI tools reportedly also includes Apple.
A survey of about 12,000 respondents conducted by the networking site Fishbowl found that nearly 43 per cent of professionals were using AI tools like ChatGPT as of January, usually without informing their bosses. The survey covered respondents from leading US-based companies.
In February, Google warned staff testing Bard against giving the AI bot internal information, according to a Business Insider report. Now, the company is gearing up to make Bard available in 40 languages and more than 180 countries. An earlier report indicated that Google was postponing Bard's EU launch, but the company told Reuters it has already had conversations with Ireland's Data Protection Commission.
Concerns over sensitive information
Google also clarified that it is currently addressing regulators' questions. AI tools like these can speed up tasks such as drafting emails and documents. On the downside, the generated content can contain sensitive data, misinformation, or copyrighted passages from a famous novel.
Google updated a privacy notice on June 1, telling its staff to avoid including "confidential or sensitive information in your Bard conversations." Notably, some companies use their own software to address these concerns. For example, Cloudflare, an American company known for protecting websites from cyberattacks, is enabling businesses to label certain data and restrict it from leaking externally.
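In rough terms, that kind of data-loss prevention boils down to scanning outbound text against business-defined labels before it leaves the network. The sketch below is a minimal illustration of the idea in Python, not Cloudflare's actual product; the labels, patterns, and function names here are hypothetical.

```python
import re

# Hypothetical label-to-pattern rules a business might define. These are
# illustrative stand-ins, not any real DLP product's configuration format.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b\d{13,16}\b"),              # simplified card number
    "internal_codename": re.compile(r"\bproject atlantis\b", re.IGNORECASE),
    "api_key": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),          # long token-like string
}

def find_sensitive_labels(text: str) -> list[str]:
    """Return the labels of all sensitive patterns found in outbound text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_outbound(text: str) -> bool:
    """Decide whether a chatbot-bound message may leave the network."""
    hits = find_sensitive_labels(text)
    if hits:
        print(f"Blocked: message matched sensitive labels {hits}")
        return False
    return True

# An employee pastes an internal codename into a public chatbot prompt.
allow_outbound("Summarise the Project Atlantis roadmap for me")  # blocked
allow_outbound("What is the capital of France?")                 # allowed
```

Commercial tools layer policy management, logging, and far broader detection on top of this basic gatekeeping pattern.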
Likewise, Google and Microsoft are marketing conversational tools to business customers; these steeply priced tools do not absorb data into public AI models. By default, both Google Bard and ChatGPT save users' conversation history, although users can easily delete it. Microsoft's consumer chief marketing officer Yusuf Mehdi recently pointed out that it makes sense that companies do not want their employees to use public chatbots for work. "Companies are taking a duly conservative standpoint," Mehdi explained.
Comparing Microsoft's free Bing chatbot with the company's enterprise software, Mehdi noted that "there, our policies are much more strict." It is unclear whether Microsoft has banned its staff from entering confidential information into public AI tools. Cloudflare CEO Matthew Prince compared typing confidential matters into chatbots to "turning a bunch of PhD students loose in all of your private records."