UK NCSC CEO Lindy Cameron reflects on the key cybersecurity challenges the UK faces from rapidly developing AI technologies such as generative AI and large language models (LLMs).

The cybersecurity industry cannot rely on its ability to retrofit security into developing machine learning (ML) and artificial intelligence (AI) technology to prevent the security risks introduced by innovations such as generative AI and large language models (LLMs), according to Lindy Cameron, CEO of the UK National Cyber Security Centre (NCSC).

Cameron was speaking today in the opening keynote of the Chatham House Cyber 2023 conference, where she addressed the key cybersecurity challenges the UK faces from rapidly developing AI technologies such as OpenAI’s ChatGPT chatbot.

Security has often been a secondary consideration when the pace of technology development is high, but AI developers must predict possible attacks and identify ways to mitigate them, Cameron said. Failure to do so risks designing vulnerabilities into future AI systems, she warned. “Amid the huge dystopian hype about the impact of AI, I think there is a danger that we miss the real, practical steps that we need to take to secure AI.”

UK NCSC focuses on three elements to help secure developing AI

Being secure is an essential prerequisite for ensuring that AI is safe, ethical, explainable, reliable, and as predictable as possible, Cameron said. “Users need reassurance that machine learning is being deployed securely, without putting personal safety or personal data at risk. In addition to the overarching need for security to be built into AI and ML systems, and for companies profiting from AI to be responsible vendors, the NCSC is focusing on three elements to help with the cybersecurity of AI.”

First, the NCSC believes it is essential that organisations using AI understand the risks they are running and how to mitigate them, Cameron stated. “For example, machine learning introduces an entirely new category of attack: adversarial attacks. As machine learning is so heavily reliant on the data used for training, if that data is manipulated, it creates potential for certain inputs to result in unintended behaviour, which adversaries can then exploit.”
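Cameron did not go into the mechanics, but the scenario she describes, where training data is manipulated so that particular inputs produce unintended behaviour, is commonly known as data poisoning. The short Python sketch below is purely illustrative and is not drawn from NCSC material: it classifies points with a toy k-nearest-neighbour model on synthetic data, then shows how an attacker who can inject mislabelled training points can flip the model’s answer for a chosen input.

    # Purely illustrative data-poisoning sketch (hypothetical; not NCSC code).
    # A toy k-nearest-neighbour classifier runs on synthetic 2D data. An
    # attacker who can tamper with the training set injects mislabelled
    # points near a chosen input so the model misclassifies it.
    import numpy as np

    rng = np.random.default_rng(0)

    # Clean training data: class 0 clustered near (0, 0), class 1 near (4, 4).
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    def predict(X_train, y_train, x, k=5):
        """Classify x by majority vote among its k nearest training points."""
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest_labels = y_train[np.argsort(dists)[:k]]
        return np.bincount(nearest_labels).argmax()

    target = np.array([3.5, 3.5])  # the input the attacker wants misclassified
    print("clean model:", predict(X, y, target))  # -> 1 (correct)

    # Poisoning: inject 50 points around the target, mislabelled as class 0.
    X_bad = np.vstack([X, rng.normal(3.5, 0.15, (50, 2))])
    y_bad = np.concatenate([y, np.zeros(50, dtype=int)])
    print("poisoned model:", predict(X_bad, y_bad, target))  # -> 0 (flipped)

In this toy example the poisoned points only distort predictions in a small neighbourhood of the target, while the rest of the input space behaves normally, which hints at why this class of attack can be hard to spot in practice.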
LLMs pose entirely different security challenges, Cameron continued. “For example, an organisation’s intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts.” A minimal illustrative sketch of this kind of prompt screening appears at the end of this article.

As the disruptive power of AI becomes increasingly apparent, CEOs at major companies will be making investment decisions about AI, and security considerations must be central to those deliberations, Cameron argued.

Second, there is a need to maximise the benefits of AI to the cyber defence community. “AI has the potential to improve cybersecurity by dramatically increasing the timeliness and accuracy of threat detection and response. We [also] need to remember that in addition to helping make our country safer, the AI cybersecurity sector also has huge economic potential.”

Third, the cybersecurity sector must understand how adversaries – whether hostile states or cybercriminals – are using AI, and how to disrupt them, Cameron said. “We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft.”

China is positioning itself to be a world leader in AI and, if successful, we must assume it will use this to secure a dominant role in global affairs, Cameron added. “LLMs also present a significant opportunity for states and cybercriminals. They lower barriers to entry for some attacks. For example, they make writing convincing spear-phishing emails much easier for foreign nationals without strong linguistic skills.”
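The prompt-leakage risk Cameron highlights is often addressed by screening staff prompts before they leave the organisation. The Python sketch below is a purely hypothetical illustration of that idea: the regex patterns and the screen_prompt function are invented for this example, and real data-loss-prevention controls are far more sophisticated.

    # Illustrative prompt-screening sketch (hypothetical; not an NCSC tool).
    # Before staff-submitted text is sent to an external LLM, a naive filter
    # redacts strings that look like credentials, email addresses, or
    # protective markings.
    import re

    PATTERNS = {
        "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
        "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "marking": re.compile(r"\b(?:CONFIDENTIAL|OFFICIAL[- ]SENSITIVE|SECRET)\b", re.I),
    }

    def screen_prompt(prompt: str) -> tuple[str, list[str]]:
        """Redact likely-sensitive substrings; return cleaned prompt and findings."""
        findings = []
        for name, pattern in PATTERNS.items():
            if pattern.search(prompt):
                findings.append(name)
                prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
        return prompt, findings

    cleaned, hits = screen_prompt(
        "Summarise this OFFICIAL-SENSITIVE report and email bob@example.com "
        "using token_abcdefghijklmnop123."
    )
    print(hits)     # ['api_key', 'email', 'marking']
    print(cleaned)  # the marking, address, and token are replaced before sending

In practice, a control like this would typically sit in a gateway between staff tools and the LLM provider, alongside policy, logging, and user training, rather than relying on pattern matching alone.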