Sat. Feb 7th, 2026

UK government announces deepfake detection initiative with Microsoft

The UK government has announced a collaboration with Microsoft and top academics to build a robust defence against the skyrocketing threat of deepfakes.

This new initiative centres on developing a standardised evaluation framework designed to identify critical gaps in deepfake detection.

By testing current technologies against real-world threats – including fraud, impersonation, and non-consensual sexual abuse – the government aims to establish clear benchmarks for the tech industry to meet.

The urgency of the project is underscored by staggering growth in synthetic media. Official figures reveal that an estimated eight million deepfakes were shared in 2025 alone, a massive jump from just 500,000 two years prior.

Criminals are increasingly using these AI-generated images and audio to defraud the public, often targeting vulnerable individuals with sophisticated scams.

Beyond individual fraud, the initiative seeks to protect national security and public trust. Last week, the Home Office funded a “Deepfake Detection Challenge” hosted by Microsoft, where over 350 experts from INTERPOL and the “Five Eyes” intelligence community were tasked with identifying manipulated media in high-pressure scenarios involving election security and organised crime.

“Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” says Tech Secretary Liz Kendall. “The UK is leading the global fight against deepfake abuse, and those who seek to deceive and harm others will have nowhere to hide.”

Consumer advocates have welcomed the move but are calling for faster regulatory enforcement to protect people from financial ruin. Rocio Concha, Director of Policy and Advocacy at Which?, adds:

“The UK is in the grips of a scam epidemic – social media platforms are littered with convincing deepfakes designed to con people into parting with their hard-earned cash.

“Under the Online Safety Act, platforms have duties to detect and remove fraudulent content, including deepfake scams, and the government’s plan to develop a standard for identifying deepfakes could help them do this.

“For this new initiative to work, Ofcom should not hesitate to take action – including robust fines – against companies who aren’t playing their part. Many deepfakes feature in paid-for scam ads.”

The framework is part of a broader legislative push that includes criminalising the creation of non-consensual intimate deepfakes and banning the “nudification” tools that facilitate such abuse.


For latest tech stories go to TechDigest.tv
