[1]
AI is getting better and better at generating faces -- but you can train to spot the fakes
Images of faces generated by artificial intelligence (AI) are so realistic that even "super recognizers" -- an elite group with exceptionally strong facial processing abilities -- are no better than chance at detecting fake faces. People with typical recognition capabilities are worse than chance: more often than not, they think AI-generated faces are real. That's according to research published Nov. 12 in the journal Royal Society Open Science. However, the study also found that receiving just five minutes of training on common AI rendering errors greatly improves individuals' ability to spot the fakes.

"I think it was encouraging that our kind of quite short training procedure increased performance in both groups quite a lot," lead study author Katie Gray, an associate professor in psychology at the University of Reading in the U.K., told Live Science. Surprisingly, the training increased accuracy by similar amounts in super recognizers and typical recognizers, Gray said. Because super recognizers are better at spotting fake faces at baseline, this suggests that they rely on another set of clues, not simply rendering errors, to identify fake faces.

Gray hopes that scientists will be able to harness super recognizers' enhanced detection skills to better spot AI-generated images in the future. "To best detect synthetic faces, it may be possible to use AI detection algorithms with a human-in-the-loop approach -- where that human is a trained SR [super recognizer]," the authors wrote in the study.

In recent years, there has been an onslaught of AI-generated images online. Deepfake faces are created using a two-stage AI algorithm called a generative adversarial network. First, a generator produces a fake image based on real-world images; that image is then scrutinized by a discriminator, which judges whether it is real or fake. With iteration, the fake images become realistic enough to get past the discriminator (this loop is sketched in code below). These algorithms have now improved to such an extent that individuals are often duped into thinking fake faces are more "real" than real faces -- a phenomenon known as "hyperrealism."

As a result, researchers are now trying to design training regimens that can improve individuals' abilities to detect AI faces. These trainings point out common rendering errors in AI-generated faces, such as the face having a middle tooth, an odd-looking hairline or unnatural-looking skin texture. They also highlight that fake faces tend to be more proportional than real ones.

In theory, so-called super recognizers should be better at spotting fakes than the average person. Super recognizers are individuals who excel in facial perception and recognition tasks, in which they might be shown two photographs of unfamiliar people and asked to decide whether they show the same person. But to date, few studies have examined super recognizers' ability to detect fake faces, or whether training can improve their performance.

To fill this gap, Gray and her team ran a series of online experiments comparing the performance of a group of super recognizers with that of typical recognizers. The super recognizers were recruited from the Greenwich Face and Voice Recognition Laboratory volunteer database; they had performed in the top 2% of individuals on tasks where they were shown unfamiliar faces and had to remember them. In the first experiment, an image of a face -- either real or computer-generated -- appeared onscreen, and participants had 10 seconds to decide whether the face was real.
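The generator-versus-discriminator loop described above can be captured in a few dozen lines. Below is a minimal, self-contained PyTorch sketch that trains a toy GAN on two-dimensional Gaussian data rather than photographs; it illustrates the general adversarial recipe, not the specific face-generation models behind the study's stimuli, and every choice in it (LATENT_DIM, the network sizes, the learning rates) is our own illustrative assumption.

```python
# Minimal GAN training loop in PyTorch, illustrating the two-stage
# generator/discriminator process the article describes. Toy sketch on
# 2-D Gaussian data -- not the face-generation models behind the study.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM = 8  # size of the random noise vector fed to the generator

# Generator: maps random noise to a fake "sample" (here, a 2-D point).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, 2)
)
# Discriminator: scores each sample's probability of being real.
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data stands in for real photographs: a shifted Gaussian blob.
    real = torch.randn(64, 2) + torch.tensor([2.0, 2.0])
    noise = torch.randn(64, LATENT_DIM)
    fake = generator(noise)

    # Stage 1: train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) \
           + bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Stage 2: train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated points should cluster near the real data.
print(generator(torch.randn(5, LATENT_DIM)).detach())
```

The arms race in the loop is the point: the discriminator is trained to separate real from fake, and the generator is trained to erase that separation, so the fakes that survive training are, by construction, the ones hardest to tell from real data.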
Super recognizers performed no better than if they had randomly guessed, spotting only 41% of AI faces. Typical recognizers correctly identified only about 30% of fakes. The cohorts also differed in how often they thought real faces were fake: 39% of cases for super recognizers and around 46% for typical recognizers.

The next experiment was identical but included a new set of participants who received a five-minute training session in which they were shown examples of errors in AI-generated faces. They were then tested on 10 faces and given real-time feedback on their accuracy at detecting fakes. The final stage of the training was a recap of rendering errors to look out for. The participants then repeated the original task from the first experiment.

Training greatly improved detection accuracy, with super recognizers spotting 64% of fake faces and typical recognizers noticing 51%. The rate at which each group inaccurately called real faces fake was about the same as in the first experiment, with super recognizers and typical recognizers rating real faces as "not real" in 37% and 49% of cases, respectively.

Trained participants tended to take longer to scrutinize the images than the untrained participants had -- typical recognizers slowed by about 1.9 seconds and super recognizers by 1.2 seconds. Gray said this is a key message for anyone trying to determine whether a face is real or fake: slow down and really inspect the features.

It is worth noting, however, that the test was conducted immediately after participants completed the training, so it is unclear how long the effect lasts. "The training cannot be considered a lasting, effective intervention, since it was not re-tested," Meike Ramon, a professor of applied data science and an expert in face processing at the Bern University of Applied Sciences in Switzerland, wrote in a review of the study conducted before it went to print. And because separate participants were used in the two experiments, we cannot be sure how much training improves any given individual's detection skills, Ramon added; that would require testing the same people twice, before and after training.
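Those paired error rates are what justify the "no better than chance" claim. In signal detection terms, discrimination ability depends on the gap between the hit rate (fakes correctly flagged) and the false-alarm rate (real faces wrongly flagged as fake); when the two are nearly equal, responses carry almost no information about which faces are actually fake. The sketch below computes the standard sensitivity index d' from the rates reported above; it is an illustration using SciPy, not necessarily the exact analysis performed in the paper.

```python
# Sensitivity (d') from the hit and false-alarm rates reported in the
# article. d' near 0 means chance-level discrimination; negative d'
# means real faces are flagged as fake more often than fakes are caught.
# Illustrative only -- not necessarily the paper's exact analysis.
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Experiment 1 (no training):
print(f"super recognizers:   {d_prime(0.41, 0.39):+.2f}")  # ~ +0.05, chance level
print(f"typical recognizers: {d_prime(0.30, 0.46):+.2f}")  # ~ -0.42, below chance

# Experiment 2 (after five minutes of training):
print(f"trained SRs:         {d_prime(0.64, 0.37):+.2f}")  # ~ +0.69
print(f"trained typicals:    {d_prime(0.51, 0.49):+.2f}")  # ~ +0.05
```

By this measure, untrained super recognizers sit almost exactly at chance while untrained typical recognizers fall below it, and training lifts both groups by a roughly comparable margin (about +0.6 and +0.5 in d'), consistent with Gray's observation that the short procedure helped both groups.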
[2]
AI-generated faces now indistinguishable from real deal -- but...
Not only is AI slop taking over the internet, but it's becoming indistinguishable from the real deal. Scientists have found that people can't tell the difference between human and AI-generated faces without special training, per a dystopian study published in the journal Royal Society Open Science.

"Generative adversarial networks (GANs) can create realistic synthetic faces, which have the potential to be used for nefarious purposes," wrote the researchers. Recently, TikTok users blew the whistle on AI-generated deepfake doctors who were scamming social media users with unfounded medical advice.

In fact, these faces from concentrate have become so convincing that people are duped into thinking the counterfeit countenances are real more often than the genuine article, Live Science reported.

To prevent people from being duped, researchers are attempting to design a five-minute training regimen to help users unmask the AI-mposters, according to lead study author Katie Gray, an associate professor in psychology at the University of Reading in the UK. These trainings help people catch glitches in AI-generated faces, such as the face having a middle tooth, a bizarre hairline or unnatural-looking skin texture. These false visages are also often more proportional than their bona fide counterparts.

The team tested out the technique by running a series of experiments contrasting the performance of a group of typical recognizers and super recognizers -- defined as those who excel at facial recognition tasks. The latter participants, who were sourced from the Greenwich Face and Voice Recognition Laboratory volunteer database, had reportedly ranked in the top 2% of individuals in exams where they had to recall unfamiliar faces.

In the first test, organizers displayed a face onscreen and gave participants 10 seconds to determine if it was real or fake. Typical recognizers spotted only 30% of fakes, while super recognizers caught just 41% -- no better than if they'd just randomly guessed.

The second experiment was almost identical, except it involved a new group of guinea pigs who had received the aforementioned five-minute training on how to spot errors in AI-generated faces. The test takers were shown 10 faces and evaluated on their AI-detection accuracy in real time, culminating in a review of common rendering mistakes.

When they then took the original test, their accuracy had improved, with super recognizers IDing 64% of the fugazi faces while their normal counterparts recognized 51%. Trained participants also took longer to examine the faces before giving their answer.

"I think it was encouraging that our kind of quite short training procedure increased performance in both groups quite a lot," said Gray.

Of course, there are a few caveats to the study, namely that the participants were put to the test immediately after training, so it was unclear how much they would've retained had they waited longer.

Nonetheless, equipping people with the tools to distinguish humans from bots is essential in light of the plethora of AI-mpersonators flooding social media. And the tech's chameleonic prowess isn't just visual: researchers recently claimed that language bot ChatGPT had passed the Turing Test, meaning it is effectively no longer discernible from its flesh-and-blood brethren.
Artificial intelligence has become so advanced that even elite facial recognition experts can barely distinguish AI-generated faces from real ones. A new study reveals that super recognizers—people in the top 2% for facial recognition abilities—correctly identify only 41% of fake faces, performing no better than random chance. However, researchers found that a brief five-minute training regimen dramatically improves detection accuracy, offering a practical solution to combat deepfakes flooding social media.
Artificial intelligence has reached a troubling milestone in its ability to create realistic human faces. According to research published in Royal Society Open Science, even super recognizers—individuals who rank in the top 2% for facial recognition abilities—can detect AI-generated faces only 41% of the time, performing no better than random chance [1]. People with typical recognition capabilities fare even worse, correctly identifying only about 30% of fakes [1]. This phenomenon, known as hyperrealism, occurs when individuals are duped into thinking fake faces appear more authentic than genuine human faces.
The study, led by Katie Gray, an associate professor in psychology at the University of Reading, offers an encouraging solution to this challenge. Researchers developed a five-minute training regimen that teaches participants to identify AI rendering errors common in synthetic faces [1]. The training highlights specific visual cues such as middle teeth, odd-looking hairline patterns, and unnatural skin texture. After completing this brief session, super recognizers improved their ability to identify AI-generated faces to 64%, while typical recognizers reached 51% accuracy [1]. "I think it was encouraging that our kind of quite short training procedure increased performance in both groups quite a lot," Gray told Live Science [1].

The training approach focuses on detecting fakes by teaching participants to slow down and scrutinize images more carefully. Trained participants took longer to examine faces—typical recognizers slowed by approximately 1.9 seconds, while super recognizers added 1.2 seconds to their review time [1]. The training emphasizes that fake faces often display more proportional features than real ones and may contain rendering errors that betray their artificial origins. Participants received real-time feedback on 10 test faces during training, followed by a recap of key warning signs to watch for.
Deepfake faces are created using generative adversarial networks, a two-stage algorithm that generates synthetic images based on real-world photographs [1]. The system includes a discriminator that evaluates whether images appear real or fake, with the generator improving through iteration until it produces faces realistic enough to pass scrutiny. This technology has advanced so rapidly that distinguishing AI from reality has become a critical challenge across social media platforms. Recently, TikTok users exposed deepfake doctors spreading unfounded medical advice, demonstrating the nefarious potential of this technology.

Gray and her team suggest that combining AI detection algorithms with human-in-the-loop approaches could provide the most effective defense against synthetic faces. "To best detect synthetic faces, it may be possible to use AI detection algorithms with a human-in-the-loop approach—where that human is a trained SR [super recognizer]," the authors wrote [1]. The research indicates that super recognizers may rely on additional clues beyond rendering errors to identify AI-generated faces, suggesting their enhanced abilities could be harnessed for broader detection efforts. While the study tested participants immediately after training, making long-term retention unclear, equipping people with tools to spot fakes remains essential as AI continues to flood digital spaces. The challenge extends beyond visual media—researchers recently claimed ChatGPT passed the Turing Test, indicating AI's expanding ability to mimic human behavior across multiple domains.