Samsung bans use of AI tools like ChatGPT over security concerns
Samsung found its employees uploaded sensitive code to ChatGPT earlier this year.
Samsung has banned its employees from using generative AI tools like ChatGPT, Bing AI, and Google Bard. According to a Bloomberg report, the Korean tech giant informed its employees about the new policy via a memo.
Notably, the new policy restricts the use of generative AI tools on Samsung-owned devices, including company computers, tablets, and phones, as well as on the company's internal networks. Samsung reportedly believes that uploading sensitive code to these AI tools poses a security threat.
Samsung puts a new policy in place
The memo suggests this is a temporary restriction while the company works to "create a secure environment" to use generative AI tools without any risk. The ban comes after the Korean brand found that some of its employees "leaked internal source code by uploading it to ChatGPT."
To recap, a bug in ChatGPT temporarily exposed AI chat histories to other users earlier this year. The bug reportedly revealed the titles of others' conversations without showing their contents. With its new policy, Samsung has joined a host of other companies and institutions that have restricted the use of generative AI tools.
For instance, JPMorgan restricted the use of AI tools citing compliance concerns, according to a CNN report. Other banks such as Wells Fargo, Goldman Sachs, Deutsche Bank, Citigroup, and Bank of America have also either banned or limited the use of AI bots. Likewise, New York City schools have prohibited AI bots like ChatGPT over fears of misinformation and cheating.
"Interest in generative AI platforms such as ChatGPT has been growing internally and externally," Samsung pointed out in the memo. While these platforms are highly efficient and useful, the company acknowledges that there are growing concerns about the security risks that generative AI presents.
Security risks presented by generative AI platforms
It is worth noting that the data transmitted to AI platforms like Bing AI, ChatGPT, and Google Bard is stored on external servers. As a result, retrieving or deleting this data is difficult. Moreover, there is a risk that this data could be exposed to other users.
Generative AI tools rose to prominence in November 2022, when Microsoft-backed AI company OpenAI launched ChatGPT. The chatbot service quickly took the technology industry by storm.
According to The Verge, ChatGPT is the biggest risk factor. Still, OpenAI's chatbot continues to gain popularity as a tool for both entertainment and serious work, coming in handy for tasks like drafting email responses and summarising reports. That, however, means OpenAI might gain access to this sensitive information too.
Moreover, cybercriminals are using AI bots like ChatGPT to craft convincing phishing emails. The privacy risks of using ChatGPT therefore depend on how you access the service: OpenAI's support team can't access a company's conversations with the chatbot if the company uses ChatGPT's API.
However, text entered into the general web interface under default settings can be used to train OpenAI's models. According to the AI company's official website, it reviews users' conversations with the chatbot to improve its systems and to ensure they comply with its policies and safety requirements.
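For illustration, the snippet below is a minimal sketch of how a company might route queries through OpenAI's API rather than the consumer web interface. The model name and prompt are placeholders, and it assumes the official openai Python package (v1 or later) with an API key supplied via the OPENAI_API_KEY environment variable.

```python
# Minimal sketch: querying ChatGPT via the API instead of the web UI.
# Assumes the official `openai` Python package (v1+) is installed and
# the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user", "content": "Summarise this report: ..."},
    ],
)

print(response.choices[0].message.content)
```

According to OpenAI's data usage policies at the time, data submitted through the API was not used to train its models by default, unlike conversations held in the consumer interface under default settings.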
Ironically, a recent NewsGuard report revealed that unidentified actors are using ChatGPT-like AI tools to create fake news websites that spread misinformation. OpenAI, understandably, recommends that users avoid sharing "any sensitive information in your conversations." It also notes that conversations may be used to train newer versions of ChatGPT.