Report by Cobalt

The State of LLM Security Report

15 Findings · Published Jun 24, 2025
Key Findings

45% of cybersecurity practitioners expressed concern about near-term operational genAI risks such as inaccurate outputs.

33% of respondents are still not conducting regular security assessments, including penetration testing, for their Large Language Model (LLM) deployments.

68% of cybersecurity practitioners expressed concern about long-term genAI threats like adversarial attacks.

32% of LLM pentest findings are classified as serious.

Overall, 69% of serious findings across all pentest categories are resolved.

The resolution rate for high-severity vulnerabilities found in LLM pentests falls to just 21%.

48% of security leaders believe a “strategic pause” is needed to recalibrate defenses against genAI-driven threats.

36% of security leaders and practitioners admit that generative AI (genAI) is moving faster than their teams can manage.

72% of security leaders cite genAI-related attacks as their top IT risk.

36% of security leaders expressed concern about near-term operational genAI risks such as inaccurate outputs.

50% of respondents want more transparency from software suppliers about how they detect and prevent vulnerabilities.

46% of all survey respondents are concerned about sensitive information disclosure due to genAI.

42% of all survey respondents are concerned about genAI model poisoning or theft.

37% of all survey respondents are concerned about genAI training data leakage.

76% of security leaders (C-suite and VP level) are more concerned about long-term genAI threats, such as adversarial attacks, than about near-term operational risks.
