The State of LLM Security Report
A report by Cobalt
Key Findings
45% of cybersecurity practitioners expressed concern about near-term operational generative AI (genAI) risks, such as inaccurate outputs.
33% of respondents are still not conducting regular security assessments, including penetration testing, for their Large Language Model (LLM) deployments.
68% of cybersecurity practitioners expressed concern about long-term genAI threats like adversarial attacks.
32% of LLM pentest findings are serious.
Overall, 69% of serious findings across all pentest categories are resolved.
The resolution rate for high-severity vulnerabilities found in LLM pentests falls to just 21%.
48% of security leaders believe a “strategic pause” is needed to recalibrate defenses against genAI-driven threats.
36% of security leaders and practitioners admit that genAI is moving faster than their teams can manage.
72% of security leaders cite genAI-related attacks as their top IT risk.
36% of security leaders expressed concern about near-term operational genAI risks such as inaccurate outputs.
50% of respondents want more transparency from software suppliers about how they detect and prevent vulnerabilities.
46% of all survey respondents are concerned about sensitive information disclosure due to genAI.
42% of all survey respondents are concerned about genAI model poisoning or theft.
37% of all survey respondents are concerned about genAI training data leakage.
76% of security leaders (C-suite and VP level) expressed concern about long-term genAI threats like adversarial attacks, a higher share than the 68% of practitioners overall.