Samsung has reportedly issued a memo prohibiting the use of generative AI systems like ChatGPT to prevent the upload of sensitive company data to external servers.

Samsung has reportedly banned employee use of generative AI tools like ChatGPT in a bid to stop the transmission of sensitive internal data to external servers.

The South Korean electronics giant issued a memo to a key division instructing employees not to use AI tools, according to a report by Bloomberg, which said it reviewed the memo. Bloomberg did not report which division received the memo.

In addition, employees using ChatGPT and other AI tools on personal devices were warned not to upload company-related data or other information that could compromise the company's intellectual property. Doing so, the memo said, could result in termination of employment.

The memo expressed concern over employees entering data such as sensitive code into AI platforms. The worry is that anything typed into a tool like ChatGPT then resides on external servers, where it is difficult to retrieve or delete and could potentially be exposed to other users.

"Interest in generative AI platforms such as ChatGPT has been growing internally and externally," the memo said. "While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI."

The memo comes in the wake of a March notification by Microsoft-backed OpenAI, the creator of ChatGPT, that a since-fixed bug in an open-source library had allowed some ChatGPT users to see titles from other active users' chat history.

Samsung's ban on the tool also comes a month after the company conducted an internal survey on the security risks associated with AI, in which about 65% of employees surveyed said ChatGPT posed serious security threats.
In addition, in April, Samsung engineers "accidentally leaked internal source code by uploading it to ChatGPT," according to the memo. The memo did not reveal precisely what the code was, nor whether it was simply typed into ChatGPT or was also seen by anyone outside Samsung.

Lawmakers set to regulate AI

Fearing the potential of ChatGPT and other AI systems to leak private data and spread false information, regulators have begun to consider restrictions on their use. The European Parliament, for instance, is days away from finalizing an AI Act, and the European Data Protection Board (EDPB) is assembling an AI task force, focused on ChatGPT, to examine potential AI dangers.

Last month, Italy imposed privacy-based restrictions on ChatGPT and temporarily banned its operation in the country. OpenAI agreed to make the changes requested by Italian regulators, after which it relaunched the service.

Companies that offer AI tools are starting to respond to concerns about privacy and data leakage. OpenAI last month announced that it would let users turn off ChatGPT's chat history. With history disabled, conversations won't be used to train OpenAI's underlying models and won't appear in the history sidebar, the company said.

Samsung, meanwhile, is working on internal AI tools for translating and summarizing documents as well as for software development, according to media reports. It is also working on ways to block the upload of sensitive company information to external services.

"HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees' productivity and efficiency," the memo said.
"However, until these measures are prepared, we are temporarily restricting the use of generative AI."

With this move, Samsung joins a growing list of companies that have placed some form of restriction on the disruptive technology, among them Wall Street banks including JPMorgan Chase, Bank of America, and Citigroup.