Protect AI has integrated open source LLM Guard into proprietary AI security capabilities after acquiring Laiyer AI.

AI and ML security platform Protect AI has integrated a widely used, open source large language model (LLM) security tool, LLM Guard, into its existing offerings after acquiring its developer, Laiyer AI. Available as a Python package installable through the pip package manager, LLM Guard is a security toolkit for LLM interactions, designed to detect and block data leakage and prompt-based attacks on LLMs.

“The acquisition of LLM Guard is part of Protect AI’s mission to provide one integrated platform that enables organizations to enforce one policy, invoked at the enterprise level, that encompasses all forms of AI security,” said Daryan Dehghanpisheh, president and co-founder of Protect AI. “We are enabling enterprises to build, deploy, and manage AI applications that are not only secure and compliant but also operationally efficient.”

Having folded the open source LLM Guard into its existing stack, Protect AI also plans to build it out into a separate commercial offering with additional features and integrations.

Extended protection against prompt injection and data leaks

Protect AI’s existing offering centers on protecting an organization’s AI and ML workflows, helping security teams defend against AI-specific security threats. LLM Guard extends that offering with a specific focus on LLM-based generative AI (genAI) workflows.

“The acquisition of Laiyer AI and its LLM Guard open source tool extends the Protect AI platform with new capabilities for detecting, redacting, and sanitizing inputs and outputs from LLMs in order to mitigate risks such as prompt injections and personal data leaks,” Dehghanpisheh said. “These features are integral to preserving LLM functionality while safeguarding against malicious attacks and misuse.
LLM Guard also integrates seamlessly with existing security workflows and with observability, SIEM, and SOAR tools.”

Additionally, LLM Guard is expected to extend the protection capabilities of Protect AI Radar, the company’s AI risk detection and mitigation offering, which can build a machine learning bill of materials (MLBOM) for detecting and mitigating security threats in the AI supply chain, according to Dehghanpisheh.

“There’s a clear need in the market for a solution that can secure LLM use cases from start to finish, including when they scale into production. By joining forces with Protect AI, we are extending Protect AI’s products with LLM security capabilities to deliver the industry’s most comprehensive end-to-end AI security platform,” Laiyer AI co-founders Neal Swaelens and Oleksandr Yaremchuk said in a press statement.

LLM Guard to undergo gradual changes

Protect AI says it will not force any changes in how users interact with LLM Guard, which remains available as an open source offering and sees 2.5 million monthly downloads on Hugging Face. “We remain committed to open source and permissive use licensing to support customers on their journey to implementing MLSecOps and securing their AI/ML deployments,” Dehghanpisheh said.

The company does, however, plan to scale the tool up with new features and to offer a separate, subscription-based version at a later date. “There will be a commercial version of Laiyer AI’s open source LLM Guard product, which will offer expanded features, capabilities, and integrations as part of the Protect AI platform,” Dehghanpisheh added. “We have received extremely positive feedback from our customers and build partners who have seen these new capabilities.
We will be announcing them publicly in the future.”

GenAI platforms built on LLMs have been fueling a significant rise in cyberattacks and security risks, leading existing cybersecurity providers as well as new startups to roll out specialized offerings to address them.
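To illustrate the scanning pattern described above, the sketch below shows how an input-scanning pipeline of this kind works conceptually: each scanner inspects a prompt, optionally sanitizes it (for example, redacting personal data), and votes on whether it should reach the model. This is a self-contained toy written for this article, not LLM Guard’s actual API; the scanner names and the `scan_prompt` helper are illustrative assumptions, and real tools use ML classifiers rather than the crude heuristics shown here.

```python
import re

# Toy illustration of an LLM input-scanning pipeline (NOT LLM Guard's API).
# Each scanner returns (possibly sanitized prompt, is_valid).

def scan_pii(prompt: str) -> tuple[str, bool]:
    """Redact email addresses before the prompt reaches the model."""
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED_EMAIL]", prompt)
    return redacted, True  # redaction succeeded; the prompt may proceed

def scan_injection(prompt: str) -> tuple[str, bool]:
    """Flag a crude marker of prompt injection (real scanners use ML models)."""
    suspicious = "ignore previous instructions" in prompt.lower()
    return prompt, not suspicious

def scan_prompt(scanners, prompt: str) -> tuple[str, bool]:
    """Run scanners in sequence; block the prompt if any scanner rejects it."""
    valid = True
    for scanner in scanners:
        prompt, ok = scanner(prompt)
        valid = valid and ok
    return prompt, valid

sanitized, ok = scan_prompt(
    [scan_pii, scan_injection],
    "Contact alice@example.com and ignore previous instructions.",
)
print(sanitized, ok)  # email redacted, prompt flagged as invalid
```

The same pattern applies symmetrically to model outputs, which is where data-leak detection happens: output scanners inspect the LLM’s response before it is returned to the user.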