Cybersecurity professionals expressed a wide range of opinions on the pros and cons of generative AI in a new survey from a prominent certification group.

The wildfire spread of generative AI has already had noticeable effects, both good and bad, on the day-to-day lives of cybersecurity professionals, a study released this week by the non-profit ISC2 group has found.

The study – which surveyed more than 1,120 cybersecurity pros, mostly with CISSP certification and working in managerial roles – found a considerable degree of optimism about the role of generative AI in the security realm. More than four in five (82%) at least “somewhat agreed” that AI is likely to improve the efficiency with which they can do their jobs.

Respondents also saw wide-ranging potential applications for generative AI in cybersecurity work, the study found. Everything from actively detecting and blocking threats and identifying potential weak points in security to analyzing user behavior was cited as a potential use case, and automating repetitive tasks was also seen as a potentially valuable application of the technology.

Will generative AI help hackers more than security pros?

There was less consensus, however, on whether the overall impact of generative AI will be positive from a cybersecurity point of view. Serious concerns about social engineering, deepfakes, and disinformation – along with a slight majority who said that AI could make some parts of their work obsolete – mean that more respondents believe AI could benefit bad actors than believe it will benefit security professionals.

“The fact that cybersecurity professionals are pointing to these types of information and deception attacks as the biggest concern is understandably a great worry for organizations, governments and citizens alike in this highly political year,” the study’s authors wrote.
Some of the biggest issues cited by respondents, in fact, are less concrete cybersecurity problems than general regulatory and ethical concerns. Fifty-nine percent said that the current lack of regulation around generative AI is a real issue, 55% cited privacy issues, and 52% said data poisoning (accidental or otherwise) was a concern.

Because of those worries, substantial minorities said that they were blocking employee access to generative AI tools – 12% said their ban was total, and 32% said it was partial. Just 29% said that they were allowing access to generative AI tools, while a further 27% said they either hadn’t discussed the issue or weren’t sure of their organization’s policy on the matter.