The complaint underscores wider privacy concerns and raises the question of whether model users or model creators are responsible for compliance with privacy rules.

Meta is facing renewed scrutiny over privacy concerns as the privacy advocacy group NOYB has lodged complaints in 11 countries against the company’s plans to use personal data for training its AI models.

NOYB has called on national regulators to take immediate action against Meta in 10 European Union member states and in Norway, arguing that changes to the company’s privacy policy due to take effect on June 26 would permit the use of extensive personal data, including posts, private images, and tracking information, for training its AI technology.

“Unlike the already problematic situation of companies using certain (public) data to train a specific AI system (e.g. a chatbot), Meta’s new privacy policy basically says that the company wants to take all public and non-public user data that it has collected since 2007 and use it for any undefined type of current and future ‘artificial intelligence technology,’” NOYB said in a statement.

“This includes the many ‘dormant’ Facebook accounts users hardly interact with anymore — but which still contain huge amounts of personal data,” the group added.

Meta announced the changes last month in an email to Facebook users inviting them to opt out. It read, in part: “AI at Meta is our collection of generative AI features and experiences, such as Meta AI and AI creative tools, along with the models that power them. … To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied from then on.”

Potential implications for other enterprises

The complaint once again highlights serious concerns about privacy and the use of consumer data in developing AI models. For enterprises, it also raises the question of who is responsible for compliance.

“The responsibility likely shifts to the entity providing the model, while the user might be exempt from liability,” said Pareekh Jain, CEO of Pareekh Consulting. “Using another company’s model, such as Meta’s or any large enterprise’s, places the responsibility on the creator of the model to use data wisely. Users typically don’t face legal issues.”

If stricter privacy laws are implemented, only larger companies may be able to afford the resulting legal and privacy challenges, limiting who can produce large language models, Jain added. Smaller companies might find compliance too costly. Enterprises would also need to ensure legal immunity in their contracts, much as OpenAI initially offered legal coverage to its users.

“As more AI models are developed and more organizations are involved, it’s crucial they include legal safeguards in their operations,” Jain said. “This shifts legal liability to the model provider. While this may slow down innovation, it ensures that companies are also responsible for legal compliance, potentially restricting smaller players from entering the market.”

Enterprises will be forced to conduct regular audits of AI models to ensure compliance with data protection laws, said Thomas George, President of Cybermedia Research.
“Financially, enterprises should consider setting aside reserves to cover potential compliance-related costs, mitigating the impact of any necessary sudden modifications to AI models,” George said. “Operationally, investing in ongoing training and development for technical teams to stay abreast of the latest compliance requirements will enable more agile adjustments to AI systems when needed.”

The user data conundrum for AI companies

The use of personal data in AI training is becoming a significant concern in the EU and beyond. Recently, Slack faced backlash over its privacy policies after a user exposed how customer data was being used in its AI models, underscoring the need for users to opt out. OpenAI is facing a federal class action lawsuit in California, where it is accused of improperly using personal information for training purposes. Italy’s data protection authority, Garante, has also said that ChatGPT violates EU data privacy standards.