Generative AI | News, how-tos, features, reviews, and videos
Your existing cloud security practices, platforms, and tools will only go so far in protecting the organization from threats inherent to the use of AI's large language models.
Risks associated with artificial intelligence have grown with the use of generative AI and companies must first understand their risk to create the best protection plan.
Prompt injection, prompt extraction, new phishing schemes, and poisoned models are the most likely risks organizations face when using large language models.
Businesses are finding more and more compelling reasons to use generative AI, which is making the development of security-focused generative AI policies more critical than ever.
Critical infrastructure and other high-risk organizations will need to do AI risk assessments and adhere to cybersecurity standards.
Businesses are using DLP tools to help secure generative AI and reduce risks of ChatGPT and similar applications.
AI is coming to Windows environments — which can be a big asset when implemented correctly and a security nightmare when it’s not.
Cybersecurity experts have incorporated ChatGPT-like tools into their work, and they use them for tasks from source-code analysis to identifying vulnerabilities.
The inevitability of AI is forcing many cybersecurity leaders to decide if it's friend or foe. Treating it as a teammate may be the ultimate solution, but there are a number of pointed questions CISOs should be asking.
AI seems to be getting embedded in everything these days, and it’s coming to Microsoft Windows. It’s time now to ensure your policies are sufficient to handle the change and — risks — it will bring.