Harmonic Security
15% of all sensitive data uploaded to generative AI tools involves personal or employee data, including identifiers such as names and addresses.
The average enterprise uploaded 4.4GB of data to generative AI platforms in Q3 2025, more than three times the 1.32GB uploaded in Q2 2025.
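As a quick back-of-the-envelope check on the quarter-over-quarter figure above (values taken directly from the statistic; this is only an arithmetic sketch, not part of the report):

```python
# Quarter-over-quarter growth in average data uploaded to GenAI platforms,
# using the GB-per-enterprise figures reported above.
q2_gb = 1.32
q3_gb = 4.4

multiple = q3_gb / q2_gb
print(f"Q3 vs Q2 multiple: {multiple:.2f}x")  # ~3.33x, i.e. "more than three times"
```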
25% of all sensitive data disclosures involve technical data, with 65% of that consisting of proprietary source code copied into generative AI tools.
12% of all sensitive data exposures originate from personal accounts, including free versions of generative AI tools.
The average organization used 27 distinct AI tools in Q3 2025, up from the 23 newly introduced tools recorded in Q2 2025.
57% of sensitive data uploaded to generative AI tools is classified as business or legal data, with 35% of that involving contract or policy drafting.
26.4% of all file uploads to generative AI tools contained sensitive data between July and September 2025, an increase from 22% in Q2 2025.
15% of Google Gemini use by employees was via personal accounts.
26.3% of ChatGPT use by employees was via personal accounts.
13.7% of all sensitive prompts analysed in Q2 originated in Microsoft Copilot.
72.6% of all sensitive prompts analysed in Q2 originated in ChatGPT.
1.8% of all sensitive prompts analysed in Q2 originated in Perplexity.
Of these incidents involving Chinese GenAI tools, the exposed data types included: 32.8% involving source code, access credentials, or proprietary algorithms; 18.2% including M&A documents and investment models; 17.8% exposing PII such as customer or employee records; and 14.4% containing internal financial data.
Of analyzed prompts and files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June, 22% of files (totaling 4,400 files) and 4.37% of prompts (totaling 43,700 prompts) were found to contain sensitive information.
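The percentages and counts in the statistic above imply overall analysis volumes; a small sketch deriving them (reported values as inputs, derived totals are back-of-the-envelope estimates, not figures from the report):

```python
# Derive the implied totals of files and prompts analyzed from the
# reported sensitive counts and their percentage shares.
sensitive_files, file_share = 4_400, 0.22          # 22% of files were sensitive
sensitive_prompts, prompt_share = 43_700, 0.0437   # 4.37% of prompts were sensitive

total_files = sensitive_files / file_share         # ~20,000 files analyzed
total_prompts = sensitive_prompts / prompt_share   # ~1,000,000 prompts analyzed
print(f"Implied files analyzed: {total_files:,.0f}")
print(f"Implied prompts analyzed: {total_prompts:,.0f}")
```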
The average enterprise uploaded 1.32GB of files (half of which were PDFs) to GenAI tools and AI-enabled SaaS applications in Q2. A full 21.86% of these files contained sensitive data.
Code leakage was the most common type of sensitive data sent to GenAI tools.
7.95% of employees in the average enterprise used a Chinese GenAI tool.
535 separate incidents of sensitive exposure were recorded involving Chinese GenAI tools.
Files sent to GenAI tools showed a disproportionate concentration of sensitive and strategic content compared to prompts: files were the source of 79.7% of all stored credit card exposures, 75.3% of customer profile leaks, 68.8% of employee PII incidents, and 52.6% of total exposure volume in financial projections.
47.42% of sensitive employee uploads to Perplexity were from users with standard (non-enterprise) accounts.
In Q2, the average enterprise saw 23 previously unknown GenAI tools newly used by their employees.
5.0% of all sensitive prompts analysed in Q2 originated in Google Gemini.
2.5% of all sensitive prompts analysed in Q2 originated in Claude.
2.1% of all sensitive prompts analysed in Q2 originated in Poe.
Financial information accounted for 14.4% of sensitive data exposed through employee use of Chinese GenAI tools at work.
1 in 12 employees, or 7.95%, used at least one Chinese GenAI tool at work.
Among the 1,059 users who engaged with Chinese GenAI tools, there were 535 incidents of sensitive data exposure.
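The user and incident counts above imply an exposure rate; a one-line arithmetic sketch (figures from the statistic, the rate itself is derived):

```python
# Exposure-incident rate among users of Chinese GenAI tools, from the figures above.
users, incidents = 1_059, 535
rate = incidents / users
print(f"Incidents per user: {rate:.2f}")  # ~0.51, roughly one incident per two users
```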
The majority of sensitive data exposure (roughly 85%) due to the use of Chinese GenAI tools occurred via DeepSeek, followed by Moonshot Kimi, Qwen, Baidu Chat and Manus.
Code and development artifacts made up 32.8% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Personally identifiable information (PII) comprised 17.8% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Customer data represented 12.0% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Mergers & acquisitions data accounted for 18.2% of sensitive data exposed through employee use of Chinese GenAI tools at work.
Organizations that implement light-touch guardrails and nudges, rather than blanket blocking of Chinese GenAI tools, have seen up to a 72% reduction in sensitive data exposure, while increasing AI adoption by as much as 300%.
Legal documents made up 4.9% of sensitive data exposed through employee use of Chinese GenAI tools at work.
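The Chinese GenAI exposure breakdown is spread across several statistics above; collecting the reported shares in one place shows they account for roughly the full total (category percentages taken from the statistics, the sum check is a sketch):

```python
# Breakdown of sensitive data exposed via Chinese GenAI tools, as reported above.
breakdown = {
    "Code & development artifacts": 32.8,
    "M&A data": 18.2,
    "PII": 17.8,
    "Financial information": 14.4,
    "Customer data": 12.0,
    "Legal documents": 4.9,
}
total = sum(breakdown.values())
for category, pct in sorted(breakdown.items(), key=lambda kv: -kv[1]):
    print(f"{category:30s} {pct:5.1f}%")
print(f"{'Total':30s} {total:5.1f}%")  # ~100% once rounding is accounted for
```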
63.8% of ChatGPT users were on the free tier, which accounted for 53.5% of sensitive prompts entered into ChatGPT.
When asked if they agree with the statement "We aren't sure if any employees are currently accessing GenAI sites today or what they are doing on these sites," 42% of organizations surveyed said they strongly agree, 40% said they agree, 7% said they neither agree nor disagree, 5% said they disagree, and 5% said they strongly disagree.
8.5% of GenAI prompts contain sensitive information.
When asked if they agree with the statement "My organization has blocked/is blocking access to one or several GenAI sites," 44% of organizations surveyed said they strongly agree, 42% said they agree, 6% said they neither agree nor disagree, 5% said they disagree, and 2% said they strongly disagree.
5.64% of sensitive data input into GenAI tools was sensitive code, such as access keys and proprietary source code.
45.77% of sensitive data input into GenAI tools was customer data, such as billing information, customer reports, and customer authentication data.
When asked if they agree with the statement "We are concerned about data leakage as employees increasingly use GenAI tools," 43% of organizations surveyed said they strongly agree, 39% said they agree, 10% said they neither agree nor disagree, 5% said they disagree, and 3% said they strongly disagree.
26.83% of sensitive data input into GenAI tools was employee data, including payroll data, PII, and employment records.
14.88% of sensitive data input into GenAI tools was legal and finance data, such as information on Sales Pipeline Data, Investment Portfolio Data, and Mergers and Acquisitions.
6.88% of sensitive data input into GenAI tools was security policies and reports.