Most UK IT leaders are concerned about malicious use of ChatGPT, as research shows how its capabilities can significantly enhance phishing and BEC scams.

Most UK IT leaders believe that foreign states are already using the ChatGPT chatbot for malicious purposes against other nations. That's according to a new study from BlackBerry, which surveyed 500 UK IT decision makers and found that, while 60% of respondents see ChatGPT as generally being used for "good" purposes, 72% are concerned about its potential to be used for malicious purposes when it comes to cybersecurity. In fact, almost half (48%) predicted that a successful cyberattack will be credited to the technology within the next 12 months. The findings follow recent research which showed how attackers can use ChatGPT to significantly enhance phishing and business email compromise (BEC) scams.

UK IT leaders fearful of malicious exploitation of ChatGPT's capabilities

ChatGPT, OpenAI's free chatbot based on GPT-3.5, was released on November 30, 2022, and racked up a million users in five days. It is capable of writing emails, essays, code, and even phishing emails, if the user knows how to ask.

BlackBerry's study found that attackers' ability to use these capabilities to craft more believable and legitimate-sounding phishing emails is a top concern for 57% of the UK IT leaders surveyed. This was followed by the increasing sophistication of attacks (51%) and the ability to accelerate new social engineering attacks (49%). Almost half of UK-based IT directors are concerned about ChatGPT's potential to be used for spreading misinformation (49%), as well as its capacity to help less experienced hackers improve their technical knowledge (47%). Furthermore, 88% of respondents said that governments have a responsibility to regulate advanced technologies such as ChatGPT.

"ChatGPT will likely increase its influence in the cyber industry over time," commented Shishir Singh, CTO of cybersecurity at BlackBerry. "We've all seen a lot of hype and scaremongering, but the pulse of the industry remains fairly pragmatic – and for good reason. There are a lot of benefits to be gained from this kind of advanced technology and we're only beginning to scratch the surface, but we also can't ignore the ramifications."

ChatGPT can significantly enhance phishing and BEC scams

In January, researchers with security firm WithSecure demonstrated how the GPT-3 natural language generation model can be used to make social engineering attacks such as phishing or business email compromise (BEC) scams harder to detect and easier to pull off. The study revealed that attackers can not only generate unique variations of the same phishing lure in grammatically correct, human-like text, but also build entire email chains to make their messages more convincing, and can even generate messages in the writing style of real people based on provided samples of their communications.

"The generation of versatile natural-language text from a small amount of input will inevitably interest criminals, especially cybercriminals – if it hasn't already," the researchers said in their paper.
"Likewise, anyone who uses the web to spread scams, fake news or misinformation in general may have an interest in a tool that creates credible, possibly even compelling, text at super-human speeds."