Britain partners with Microsoft on national deepfake detection framework
- Nikita Silaech
- 10 hours ago
- 2 min read

Britain has announced a new partnership with Microsoft, academic researchers and technical experts to build a national framework for detecting harmful deepfakes online.
The project will evaluate tools that can spot deceptive, AI-generated audio and video, and its findings will be used to set consistent standards for how platforms and companies assess deepfake detection technologies. The partnership is part of a broader effort to define shared benchmarks for handling harmful synthetic media rather than leaving each platform to design its own rules.
The initiative builds on recent legislation that criminalised the creation of non-consensual intimate images in the United Kingdom. Technology Secretary Liz Kendall said deepfakes are already being used to defraud the public, target women and girls, and erode trust in what people see and hear online. In response, the framework will test detection systems against concrete threat scenarios such as sexual abuse material, impersonation scams and financial fraud, and will map where current tools fail so that law enforcement and regulators have a clearer view of the gaps.
Estimates suggest that about 8 million deepfake items were shared in 2025, up from roughly 500,000 in 2023, a more than fifteen-fold increase in two years. Regulators have also been pushed to act after reports that Elon Musk's Grok chatbot could generate non-consensual sexualised images of adults and minors; Ofcom, the UK communications watchdog, and the privacy regulator have both opened investigations into Grok's role in harmful synthetic content. By tying a major cloud provider into a government-led evaluation framework, Britain is positioning the project as a potential reference point for international standards on deepfake detection and platform responsibility.