The SAIF is designed to help mitigate risks specific to AI systems like model theft, training data poisoning, and malicious injections. Credit: Jonny Lindner Google has announced the launch of the Secure AI Framework (SAIF), a conceptual framework for securing AI systems. Google, owner of the generative AI chatbot Bard and parent company of AI research lab DeepMind, said a framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements so that when AI models are implemented, they’re secure-by-default. Its new framework concept is an important step in that direction, the tech giant claimed. The SAIF is designed to help mitigate risks specific to AI systems like model theft, poisoning of training data, malicious inputs through prompt injection, and the extraction of confidential information in training data. “As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical,” Google wrote in a blog. The launch comes as the advancement of generative AI and its impact on cybersecurity continues to make the headlines, coming into the focus of both organizations and governments. Concerns about the risks these new technologies could introduce range from the potential issues of sharing sensitive business information with advanced self-learning algorithms to malicious actors using them to significantly enhance attacks. The Open Worldwide Application Security Project (OWASP) recently published the top 10 most critical vulnerabilities seen in large language model (LLM) applications that many generative AI chat interfaces are based upon, highlighting their potential impact, ease of exploitation, and prevalence. Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution. Google’s SAIF built on six AI security principles Google’s SAIF builds on its experience developing cybersecurity models, such as the collaborative Supply-chain Levels for Software Artifacts (SLSA) framework and BeyondCorp, its zero-trust architecture used by many organizations. It is based on six core elements, Google said. These are: Expand strong security foundations to the AI ecosystem including leveraging secure-by-default infrastructure protections. Extend detection and response to bring AI into an organization’s threat universe by monitoring inputs and outputs of generative AI systems to detect anomalies and using threat intelligence to anticipate attacks. Automate defenses to keep pace with existing and new threats to improve the scale and speed of response efforts to security incidents. Harmonize platform level controls to ensure consistent security including extending secure-by-default protections to AI platforms like Vertex AI and Security AI Workbench, and building controls and protections into the software development lifecycle. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment via techniques like reinforcement learning based on incidents and user feedback. Contextualize AI system risks in surrounding business processes including assessments of end-to-end business risks such as data lineage, validation, and operational behavior monitoring for certain types of applications. Google will expand bug bounty programs, incentivize research around AI security Google set out the steps it is and will be taking to advance the framework. These include fostering industry support for SAIF with the announcement of key partners and contributors in the coming months and continued industry engagement to help develop the NIST AI Risk Management Framework and ISO/IEC 42001 AI Management System Standard (the industry’s first AI certification standard). It will also work directly with organizations, including customers and governments, to help them understand how to assess AI security risks and mitigate them. “This includes conducting workshops with practitioners and continuing to publish best practices for deploying AI systems securely,” Google said. Furthermore, Google will share insights from its leading threat intelligence teams like Mandiant and TAG on cyber activity involving AI systems, along with expanding its bug hunters programs (including its Vulnerability Rewards Program) to reward and incentivize research around AI safety and security, it added. Lastly, Google will continue to deliver secure AI offerings with partners like GitLab and Cohesity, and further develop new capabilities to help customers build secure systems. Related content opinion Project 2025 could escalate US cybersecurity risks, endanger more Americans The conservative think tank blueprint for how Donald Trump should govern the US if he wins in November calls for dismantling CISA, among many cyber-related measures. Experts say this would increase cybersecurity risks, undermine critical infrastructu By Cynthia Brumfield 25 Jul 2024 10 mins Government IT Government IT Governance Frameworks news analysis NIST releases expanded 2.0 version of the Cybersecurity Framework The US National Institute of Standards and Technology released the 2.0 version of its Cybersecurity Framework, focusing more on governance and supply chain issues and offering resources to speed the framework’s implementation. By Cynthia Brumfield 27 Feb 2024 6 mins IT Governance Frameworks Supply Chain Critical Infrastructure news CREST, IASME to deliver UK NCSC’s Cyber Incident Exercising scheme CIE scheme aims to help organisations find quality service providers that can advise and support them in practising cyber incident response plans. By Michael Hill 26 Sep 2023 3 mins IT Governance Frameworks Incident Response Data and Information Security news eSentire introduces LLM Gateway to help businesses secure generative AI The open-source framework aims to help security teams improve their governance and monitoring of generative AI and large language models. By Michael Hill 22 Aug 2023 4 mins Generative AI IT Governance Frameworks Security Monitoring Software PODCASTS VIDEOS RESOURCES EVENTS SUBSCRIBE TO OUR NEWSLETTER From our editors straight to your inbox Get started by entering your email address below. Please enter a valid email address Subscribe