Deepfakes Statistics
Latest Statistics
45% of internal audit leaders identify deepfake audio or video impersonation as a leading AI-enabled fraud threat.
77% of organizations have been targeted by deepfake attacks.
51% of organizations have faced sophisticated, personalized phishing emails powered by deepfake technology.
48% say synthetic digital content is a high or critical threat.
Only 30% of security professionals are confident that their CEOs could reliably identify a deepfake.
Deepfake attacks increased by 880% in 2024.
One study finds that 19 out of 20 popular "nudify" apps specialize in the simulated undressing of women.
62% of organizations experience deepfake incidents.
88% of organizations encounter deepfake or impersonation attacks at least occasionally.
45% of IT, cybersecurity, risk, and fraud leaders reported that deepfake or impersonation attacks are frequent occurrences in their organizations.
32% of organizations reported increased incidents related to deepfakes.
43% of consumers in Latin America cite AI-driven fraud, including deepfakes and voice cloning, as an emerging threat reshaping perceptions of safety in digital spaces.
89.4% of Americans expect mobile apps to block AI-powered threats such as bots, deepfakes, impersonation, and account takeovers.
Instances of deepfaked selfies increased by 58% in 2025.
In 2025, deepfakes were linked to 20% of biometric fraud attempts.
10% of Americans lost money after clicking on fake celebrity or influencer endorsements in 2025, with average losses of $525.
72% of Americans have seen fake celebrity or influencer endorsements in 2025.
39% of Americans have clicked on fake celebrity or influencer endorsements in 2025.
Taylor Swift ranks #1 as the most impersonated and exploited celebrity.
Only 29% of Americans feel very confident about spotting deepfakes in 2025.
44% of Americans have seen fake or AI-generated influencer endorsements in 2025.
21% of Americans have low confidence in spotting deepfakes in 2025.
37% of consumers worldwide identified the use of artificial intelligence in sophisticated scams, such as deepfakes, as their top concern in 2025.
61% of cybersecurity professionals in Germany identified deepfakes as the most significant identity-based threat in 2025.
93% of senior legal professionals are concerned that AI-generated fake assets could materially harm their business.
One in three businesses worldwide has been impacted by deepfakes and other impersonation attacks.
In 2025, 75% of companies globally reported having no dedicated plan to address generative AI risks, including deepfakes and AI-driven fraud attacks.
87% of organizations expect deepfakes to become major attack vectors in future ransomware campaigns.
89% of healthcare organizations express concern that deepfake audio and video will become major vectors for social engineering in future ransomware attacks.
90% of C-level executives express concern that deepfake audio and video will become major vectors for social engineering in future ransomware attacks.
53% of organizations implemented AI-powered threat detection.
83% of family offices are concerned about deepfakes or other impersonation threats.
One in five mobile users has been the target of a deepfake scam.
Over 61% of organizations that have lost money in a deepfake attack reported losses in excess of $100,000.
Over 40% of organizations faced three or more deepfake attacks.
Recorded audio/voice manipulations currently account for 52% of deepfake threat vectors.
The percentage of respondents who were 'very concerned' about the threat deepfakes pose to their organizations increased by over 15% from last year.
11.6% of organizations have provided no deepfake-related cybersecurity training.
Static image manipulation attacks represent 59% of deepfake threat vectors.
Nearly 19% of organizations that have lost money in a deepfake attack reported losing half a million dollars or more.