AI-generated faces now fool even experts, but five-minute training boosts detection accuracy

Reviewed by Nidhi Govil

Artificial intelligence has become so advanced that even elite facial recognition experts can barely distinguish AI-generated faces from real ones. A new study reveals that super recognizers—people in the top 2% for facial recognition abilities—correctly identify only 41% of fake faces, performing worse than random chance. However, researchers found that a brief five-minute training regimen dramatically improves detection accuracy, offering a practical solution to combat deepfakes flooding social media.

Super Recognizers Struggle With AI-Generated Faces

Artificial intelligence has reached a troubling milestone in its ability to create realistic human faces. According to research published in Royal Society Open Science, even super recognizers—individuals who rank in the top 2% for facial recognition abilities—can detect AI-generated faces only 41% of the time, performing no better than random chance [1]. People with typical recognition capabilities fare even worse, correctly identifying only about 30% of fakes [1]. This phenomenon, known as hyperrealism, occurs when viewers judge fake faces to be more authentic than genuine human faces.

Source: New York Post

Brief Training Regimen Delivers Improved Detection Accuracy

The study, led by Katie Gray, an associate professor of psychology at the University of Reading, offers an encouraging solution to this challenge. Researchers developed a five-minute training regimen that teaches participants to identify the rendering errors common in AI-generated faces [1]. The training highlights specific visual cues, such as the middle teeth, odd-looking hairline patterns, and unnatural skin texture. After completing this brief session, super recognizers improved their ability to identify AI-generated faces to 64%, while typical recognizers reached 51% accuracy [1]. "I think it was encouraging that our kind of quite short training procedure increased performance in both groups quite a lot," Gray told Live Science [1].

How People Can Differentiate Between Human and AI-Generated Faces

The training approach teaches participants to slow down and scrutinize images more carefully. Trained participants took longer to examine faces: typical recognizers slowed by approximately 1.9 seconds, while super recognizers added 1.2 seconds to their review time [1]. The training emphasizes that fake faces often display more proportional features than real ones and may contain rendering errors that betray their artificial origins. Participants received real-time feedback on 10 test faces during training, followed by a recap of key warning signs to watch for.

Generative Adversarial Networks Drive Deepfake Evolution

Deepfake faces are created using generative adversarial networks (GANs), in which two neural networks are trained against each other: a generator that synthesizes images modeled on real-world photographs, and a discriminator that evaluates whether images appear real or fake [1]. Through this competition, the generator improves over many iterations until it produces faces realistic enough to pass scrutiny. This technology has advanced so rapidly that distinguishing AI from reality has become a critical challenge across social media platforms. Recently, TikTok users exposed deepfake doctors spreading unfounded medical advice, demonstrating the nefarious potential of this technology.
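The adversarial dynamic described above can be sketched in miniature. The toy program below is a hypothetical illustration, not the image models used in the study: the "real" data are one-dimensional samples, the generator is a single learned offset, and the discriminator is a simple logistic classifier. As the two update against each other, the generator's output distribution drifts toward the real one, which is the core idea behind GAN-produced faces.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative assumption, not the study's models).
# "Real" data ~ N(3, 1); the generator shifts standard-normal noise by a
# learned offset theta; a logistic discriminator D(x) = sigmoid(w*x + b)
# tries to tell real samples from generated ones.
rng = np.random.default_rng(0)

theta = 0.0          # generator parameter: fake samples ~ N(theta, 1)
w, b = 0.0, 0.0      # discriminator parameters
lr_d, lr_g = 0.1, 0.05
batch = 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    real = rng.normal(3.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake), i.e. move theta in the
    # direction that makes fakes look "real" to the discriminator
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * np.mean((1 - d_fake) * w)

# theta should end near the real mean of 3: the generator has learned
# to mimic the real data well enough that the discriminator is at chance
print(theta)
```

The same push-and-pull happens at vastly larger scale in face-generating GANs: once the discriminator can no longer separate real from fake, human observers tend to struggle too, which is exactly the hyperrealism effect the study measured.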

Human-in-the-Loop Approaches May Offer Future Solutions

Gray and her team suggest that combining AI detection algorithms with trained human reviewers could provide the most effective defense against synthetic faces. "To best detect synthetic faces, it may be possible to use AI detection algorithms with a human-in-the-loop approach—where that human is a trained SR [super recognizer]," the authors wrote [1]. The research indicates that super recognizers may rely on clues beyond rendering errors to identify AI-generated faces, suggesting their enhanced abilities could be harnessed for broader detection efforts. Because the study tested participants immediately after training, long-term retention remains unclear; even so, equipping people with tools to spot fakes remains essential as AI-generated content continues to flood digital spaces. The challenge extends beyond visual media: researchers recently claimed ChatGPT passed the Turing test, indicating AI's expanding ability to mimic human behavior across multiple domains.
