Report by Cyberhaven
2025 AI Adoption and Risk Report
Key Findings
End-user engagement with DeepSeek through its web interface surged following the R1 release, reaching 672.8% growth over pre-release baselines by the end of the first seven weeks.
83.8% of enterprise data input into AI tools flows to platforms classified as medium, high, or critical risk.
Mid-level employees use AI tools 3.5 times more frequently than manager-level employees.
39.5% of AI tools carry a key risk factor: inadvertent exposure of user interactions and training data.
Claude usage rose 136.1% after version 3.5 launched.
34.4% of AI tools make user data accessible to third parties without adequate controls.
AI usage at work has grown an astounding 61x over the past 24 months.
AI usage at work has increased 4.6x in the past 12 months.
35.9% of AI-generated content flows into email and messaging platforms.
Mid-level software engineers use AI 189% more than their more junior counterparts.
Manufacturing companies saw 20x growth in employee AI adoption.
Only 11% of AI tools assessed qualify for low or very low risk classifications.
Cyberhaven's assessment of over 700 AI tools found that a troubling 71.7% fall into high or critical risk categories.
HR and employee records account for 4.8% of sensitive data going into AI.
Cloud documents receive 18.0% of AI-generated content.
Traditional integrated development environments (IDEs) experienced a 23.7% decline in usage when AI alternatives became available.
Llama has consistently accounted for at least 50% of local model development over the past twelve months.
Developer adoption of DeepSeek surged rapidly, reaching 17.7% of AI development activity by February 2025.
DeepSeek usage in developer activity partially subsided by March 2025, settling at 11.0%.
Sales and marketing data constitutes 10.7% of sensitive data going into AI.
Retail firms achieved a 24x increase in employee AI adoption.
Professional services have 17.2% employee AI adoption.
R&D materials account for 17.1% of sensitive data going into AI.
Currently, 34.8% of all corporate data that employees input into AI tools is classified as sensitive. This is a substantial increase from 27.4% a year ago and more than triple the 10.7% observed two years ago.
Healthcare has 11.8% employee AI adoption.
Gemini usage increased 171.9% in the seven weeks following its 2.0 release.
Source code is the most common type of sensitive data employees put into AI, accounting for 18.7% of sensitive data.
Financial services have 26.2% employee AI adoption.
Technology companies still lead in AI adoption with 38.9% of employees using AI tools.
Retail organizations have surged to second place in AI adoption, with 26.4% of employees now regularly using AI tools.
10.8% of AI-generated material enters source code management systems.
When companies officially deploy specialized AI development environments like Cursor or Cline, usage grows by 400% in the first four months after rollout.
Health records comprise 7.4% of sensitive data going into AI.
Only 16.2% of enterprise data input into AI tools is destined for enterprise-ready, low-risk alternatives.
5.5% of AI-generated outputs appear in IT and security tools.
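The sensitive-data trend above (10.7% of corporate data input into AI two years ago, 27.4% a year ago, 34.8% today) implies roughly 27% year-over-year growth and a bit more than a 3x rise over two years. A short arithmetic sketch, using only the percentages stated in the findings, makes the claim easy to verify:

```python
# Share of corporate data input into AI tools classified as sensitive,
# using the figures stated in the report's key findings.
two_years_ago = 10.7  # percent
one_year_ago = 27.4   # percent
current = 34.8        # percent

# Year-over-year growth and the two-year multiple.
yoy_growth = (current - one_year_ago) / one_year_ago  # ~0.27, i.e. ~27%
two_year_multiple = current / two_years_ago           # ~3.25x, i.e. "more than triple"

print(f"YoY growth: {yoy_growth:.1%}, two-year multiple: {two_year_multiple:.2f}x")
```

Running this confirms the report's characterization: a roughly 27% rise in the past year, and a 3.25x increase over two years.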