
Over 4 in 5 AI fraud cases in 2025 involved deepfakes, research claims



Image: Cybernews

Deepfakes have emerged as the primary weapon for artificial intelligence-driven crime, accounting for over four in five AI fraud cases recorded last year.

According to a new report from Cybernews, which analysed data from the AI Incident Database, 81% of all AI-related fraud incidents in 2025 involved some form of synthetic impersonation.

The research highlights a significant shift in the cybercrime landscape. Of the 346 total AI incidents documented in 2025, 179 involved deepfakes – ranging from voice cloning to hyper-realistic video manipulation.

Within the fraud category specifically, 107 of the 132 recorded cases were driven by deepfake technology. These scams have proven exceptionally effective because they exploit human trust through highly targeted, realistic impersonations of family members, executives, and celebrities.

Exploiting Trust

The human cost of these digital deceptions is staggering. The Cybernews analysis pointed to several high-profile cases that illustrate the reach of the technology:

  • Romance Scams: A British widow lost £500,000 after falling victim to a scammer using a deepfake of actor Jason Momoa.

  • Family Emergencies: In Florida, a woman was defrauded of $15,000 after hearing an AI-generated clone of her daughter’s voice pleading for financial help.

  • Investment Fraud: High-net-worth individuals and private citizens alike have been targeted by fabricated “live” videos of CEOs such as Elon Musk, leading to individual losses as high as $45,000.


The Growing Threat of Unsafe Content

While financial fraud dominated the statistics, the report also warned of “violent and unsafe content” generated by popular AI tools. Though accounting for only 37 cases, these incidents often had more severe, non-financial consequences.

The research found that some Large Language Models (LLMs) could still be manipulated into providing dangerous self-harm advice or detailed instructions for committing violent crimes when specific guardrails were bypassed.

Specific AI tools were named in some reports, with ChatGPT appearing most frequently (35 cases), followed by Grok, Claude, and Gemini. However, the Cybernews team noted that the actual figures are likely higher, as many incidents do not specify the exact software used.

The findings serve as a stark warning for 2026. As AI tools become more accessible, the barrier to entry for sophisticated fraud has collapsed, making verification and scepticism the most vital defences for the public.

For more information, here’s the full research: https://cybernews.com/ai-news/346-ai-incidents-in-2025-from-deepfakes-and-fraud-to-dangerous-advice/



