In January 2024, a finance employee at a multinational company in Hong Kong transferred $25 million to fraudsters after attending a video conference call in which every other participant — including the company's CFO — was a deepfake. The incident was, at the time, the most expensive deepfake fraud ever recorded. By 2026 standards, it is no longer remarkable. The technology has improved, the tools have become cheaper and more accessible, and the gap between what AI can generate and what detection systems can identify has widened significantly.

A new report from the AI Forensics Institute, published this week, documents the current state of the deepfake detection landscape with unusual candor. The report's central finding is stark: the best available commercial detection tools correctly identify AI-generated video content approximately 65% of the time under real-world conditions. The best generation tools produce content that fools human observers approximately 85% of the time. The asymmetry is not new, but it is growing.

[Figure: Deepfake Detection Accuracy vs. Generation Quality, 2022-2026. Two series, detection and generation, plotted as percentages by year; detection accuracy declines as generation quality (% of synthetic content fooling human observers) improves.]

The technical reasons for this asymmetry are well understood. Detection systems are trained to identify specific artifacts — inconsistencies in lighting, unnatural blinking patterns, subtle distortions around the hairline — that characterize the output of current generation systems. But generation systems are also trained, in part, to avoid these artifacts. The result is a classic adversarial dynamic: each improvement in detection capability is eventually matched by improvements in generation, and the detection systems are always playing catch-up.
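To make the dynamic concrete, the toy sketch below (illustrative only, not drawn from the report; all data and thresholds are invented) trains a detector on a single hypothetical artifact statistic and then shows its hit rate collapsing against a generator tuned to suppress that artifact.

```python
import numpy as np

rng = np.random.default_rng(0)

def artifact_score(frames):
    """Toy stand-in for an artifact detector: measures frame-to-frame
    high-frequency energy that (hypothetically) betrays synthetic video."""
    return np.abs(np.diff(frames, axis=1)).mean(axis=1)

# Invented data: each row is a "video", columns are per-frame luminance samples.
real    = rng.normal(0.0, 1.0, (500, 64))
fake_v1 = rng.normal(0.0, 1.6, (500, 64))   # early generator: artifact is pronounced
fake_v2 = rng.normal(0.0, 1.05, (500, 64))  # later generator tuned to suppress it

# "Train" the detector: a threshold separating real scores from fake_v1 scores.
threshold = (artifact_score(real).mean() + artifact_score(fake_v1).mean()) / 2

def detection_rate(fake):
    return (artifact_score(fake) > threshold).mean()

print("caught, old generator:", detection_rate(fake_v1))  # nearly all flagged
print("caught, new generator:", detection_rate(fake_v2))  # detection collapses
```

The detector does nothing wrong; the artifact it learned simply stops existing once the generator is trained to avoid it, which is the catch-up problem in miniature.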

"Detection is fundamentally reactive. You can only detect artifacts that you know to look for, and the generators are constantly producing new ones. We are in a race we cannot win on purely technical grounds."

— Lead researcher, AI Forensics Institute

The implications extend well beyond financial fraud. The 2026 election cycle in multiple countries has been marked by the proliferation of synthetic media depicting candidates saying things they never said. In several cases, deepfake videos were shared millions of times before being identified as synthetic. The identification, when it came, reached a fraction of the original audience. The correction problem — the well-documented tendency for false information to spread faster and further than corrections — is significantly amplified when the false information is video.

Several technology companies have proposed technical solutions to the provenance problem. The C2PA standard, now supported by Adobe, Microsoft, Google, and several major camera manufacturers, embeds cryptographic signatures in content at the point of creation. Content that is modified after creation, unless the edit is made and re-signed by a compliant tool, will carry a broken or absent signature. This approach addresses provenance for content created by compliant devices and software, but it does nothing for content created by non-compliant tools, a category that includes every current deepfake generation system.
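The mechanism such standards rely on can be illustrated with a generic digital signature. The sketch below is not the C2PA manifest format, only the underlying idea, and it assumes the Python cryptography package is installed: the capture device signs the content, and any later byte-level change invalidates the signature.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical capture device holds a private key; verifiers hold the public key.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

original = b"...raw video bytes as captured..."
signature = device_key.sign(original)  # shipped alongside the content

def provenance_intact(content: bytes) -> bool:
    """Return True if the content still matches the device's signature."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

print(provenance_intact(original))                  # True: unmodified
print(provenance_intact(original + b" tampered"))   # False: signature broken
```

The limitation the report highlights follows directly from this design: a verifier can only confirm that signed content is unchanged, and content that was never signed in the first place, such as the output of a deepfake generator, simply carries no signature to check.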

The AI Forensics Institute report recommends a combination of technical, legal, and social approaches. On the technical side, it calls for increased investment in detection research and for detection tools to be made freely available to journalists, election officials, and civil society organizations. On the legal side, it recommends criminal penalties for the use of deepfakes in financial fraud and electoral interference, with civil liability for platforms that knowingly host synthetic media without disclosure. On the social side, it calls for media literacy education that helps people understand the limitations of their intuitive ability to detect synthetic content.

None of these recommendations are new, and the gap between recommendation and implementation remains wide. What has changed is the urgency. The technology is no longer theoretical or experimental — it is deployed, accessible, and being used for harm at scale. The window for getting ahead of the problem, if it ever existed, is closing rapidly.