3 Sources
[1]
AI Can Spot Lies, But Not as Well as Humans Can - Neuroscience News
Summary: A large-scale study tested whether AI personas can detect when humans are lying, and found that while AI can sometimes spot deception, it is still far from trustworthy. Across 12 experiments involving 19,000 AI participants, the systems performed inconsistently, showing a strong bias toward identifying lies rather than truths. In some cases AI matched human accuracy, but in others it failed to distinguish honest statements. The findings suggest that while AI can mimic human judgment, it lacks the emotional and contextual depth required to make reliable decisions about honesty.

Can an AI persona detect when a human is lying, and should we trust it if it can? Artificial intelligence, or AI, has had many recent advances and continues to evolve in scope and capability. A new Michigan State University-led study is diving deeper into how well AI can understand humans by using it to detect human deception.

In the study, published in the Journal of Communication, researchers from MSU and the University of Oklahoma conducted 12 experiments with over 19,000 AI participants to examine how well AI personas were able to detect deception and truth from human subjects. "This research aims to understand how well AI can aid in deception detection and simulate human data in social scientific research, as well as caution professionals when using large language models for lie detection," said David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study.

To evaluate AI against human deception detection, the researchers drew on Truth-Default Theory (TDT). TDT suggests that people are mostly honest most of the time and that we are inclined to believe others are telling us the truth. This theory helped the researchers compare how AI acts to how people act in the same kinds of situations. "Humans have a natural truth bias -- we generally assume others are being honest, regardless of whether they actually are," Markowitz said. "This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships."

To analyze the judgment of AI personas, the researchers used the Viewpoints AI research platform to assign audiovisual or audio-only media of humans for AI to judge. The AI judges were asked to determine whether the human subject was lying or telling the truth and to provide a rationale. Different variables were evaluated, such as media type (audiovisual or audio-only), contextual background (information or circumstances that help explain why something happens), lie-truth base rates (proportions of honest and deceptive communication), and the persona of the AI (identities created to act and talk like real people), to see how the AI's detection accuracy was affected.

For example, one of the studies found that AI was lie-biased: it was much more accurate for lies (85.8%) than for truths (19.5%). In short interrogation settings, AI's deception accuracy was comparable to humans'. However, in a non-interrogation setting (e.g., when evaluating statements about friends), AI displayed a truth bias, aligning more closely with human performance. Overall, the results showed that AI is more lie-biased and much less accurate than humans.

"Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context -- but that didn't make it better at spotting lies," said Markowitz.

The findings suggest that AI's results do not match human results or accuracy, and that humanness might be an important limit, or boundary condition, for how deception detection theories apply. The study highlights that using AI for detection may seem unbiased, but the industry needs to make significant progress before generative AI can be used for deception detection. "It's easy to see why people might want to use AI to spot lies -- it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we're not there yet," said Markowitz. "Both researchers and professionals need to make major improvements before AI can truly handle deception detection."

The (in)efficacy of AI personas in deception detection experiments (journal abstract)
Artificial intelligence (AI) has recently been used to aid in deception detection and to simulate human data in social scientific research, so it is important to consider how well these tools can inform both enterprises. We report 12 studies, accessed through the Viewpoints.ai research platform, in which AI (gemini-1.5-flash) made veracity judgments of humans. We systematically varied the nature and duration of the communication, modality, truth-lie base rate, and AI persona. AI performed best (57.7%) when detecting truths and lies involving feelings about friends, although it was notably truth-biased (71.7%). However, in assessing cheating interrogations, AI was lie-biased, judging more than three-quarters of interviewees as cheating liars. In assessing interviews where humans perform at rates over 70%, accuracy plummeted to 15.9% with an ecological base rate. AI yielded results different from prior human studies; therefore, we caution against using certain large language models for lie detection.
[2]
MSU study dives deeper into how well AI can detect human deception
Michigan State University, Nov 4, 2025
Journal reference: Markowitz, D. M., & Levine, T. R. (2025). The (in)efficacy of AI personas in deception detection experiments. Journal of Communication. doi.org/10.1093/joc/jqaf034
[3]
How AI personas could be used to detect human deception
A comprehensive Michigan State University study involving over 19,000 AI participants found that while AI can sometimes detect human deception, it performs inconsistently and shows strong bias toward identifying lies rather than truths, falling short of human accuracy in most scenarios.
A comprehensive new study led by Michigan State University has revealed significant limitations in artificial intelligence's ability to detect human deception, challenging assumptions about AI's potential as a reliable lie detector. The research, published in the Journal of Communication, represents one of the largest investigations into AI deception detection to date [1].
The study, conducted in collaboration with the University of Oklahoma, involved 12 separate experiments with over 19,000 AI participants examining how well AI personas could distinguish between truthful and deceptive statements from human subjects [2].

Researchers utilized the Viewpoints AI research platform to conduct their analysis, using the gemini-1.5-flash model to make veracity judgments about human communication. The AI systems were presented with both audiovisual and audio-only media of humans and asked to determine whether subjects were lying or telling the truth, while also providing rationales for their decisions [3].
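As a rough sketch of what a single judgment of this kind could look like outside the Viewpoints platform, the snippet below calls the same gemini-1.5-flash model through Google's public generativeai Python library. Only the model name comes from the study; the persona wording, file name, and prompt format are invented for illustration and are not the authors' actual prompts or pipeline.

```python
# Minimal sketch of one persona-based veracity judgment (illustrative;
# not the study's actual Viewpoints pipeline or prompt wording).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # model reported in the paper

# Hypothetical persona and stimulus; the paper's personas and media differ.
persona = "You are a 42-year-old teacher who tends to judge people charitably."
clip = genai.upload_file("interview_clip.mp4")  # audiovisual condition
# (large video files may need a moment to finish processing before use)

prompt = (
    persona + "\n"
    "Watch the clip and decide whether the speaker is lying or telling "
    "the truth, then explain your reasoning in one short paragraph.\n"
    "Respond as:\nVERDICT: LIE or TRUTH\nRATIONALE: <your reasoning>"
)

response = model.generate_content([prompt, clip])
print(response.text)
```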
The study systematically varied multiple factors to assess AI performance, including media type, contextual background, lie-truth base rates, and different AI personas. This comprehensive approach allowed researchers to examine how various conditions affected the AI's detection accuracy [1].
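To give a feel for that factorial structure, the sketch below crosses the four factors the article names. The specific levels shown are invented stand-ins, not the paper's actual experimental conditions.

```python
# Hypothetical crossing of the four manipulated factors; level names are
# invented examples, not the paper's actual conditions.
from itertools import product

media_types = ["audiovisual", "audio-only"]
contexts = ["background provided", "no background"]
truth_base_rates = [0.5, 0.95]  # share of honest clips; 0.95 stands in for an "ecological" rate
personas = ["neutral observer", "seasoned investigator"]

conditions = list(product(media_types, contexts, truth_base_rates, personas))
print(f"{len(conditions)} conditions; first: {conditions[0]}")
# 16 conditions; first: ('audiovisual', 'background provided', 0.5, 'neutral observer')
```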
To establish a baseline for comparison, researchers grounded their analysis in Truth-Default Theory (TDT), which suggests that humans naturally assume others are being honest most of the time. "Humans have a natural truth bias -- we generally assume others are being honest, regardless of whether they actually are," explained David Markowitz, associate professor of communication at MSU and lead author of the study [2].

This evolutionary tendency serves practical purposes, as constantly doubting everyone would require significant mental effort and strain social relationships. The theory provided a framework for comparing AI behavior to established human patterns in deception detection scenarios [3].
The results revealed dramatic inconsistencies in AI performance across different contexts. In one particularly striking finding, AI demonstrated extreme bias toward identifying lies, achieving 85.8% accuracy when detecting deceptive statements but only 19.5% accuracy when identifying truthful ones [1].

However, the AI's performance varied significantly depending on the setting. In short interrogation scenarios, AI's deception detection accuracy was comparable to human performance. Conversely, in non-interrogation settings, such as evaluating statements about friends, AI displayed a truth bias that more closely aligned with typical human behavior patterns [2].
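Those per-class numbers make the base-rate problem concrete: the journal abstract reports accuracy plummeting to 15.9% under an ecological base rate, and a back-of-the-envelope calculation shows why a lie-biased judge collapses once most speakers are honest. The 95%-honest mix below is an assumed illustration, not the study's actual base rate.

```python
# Back-of-the-envelope: overall accuracy of a lie-biased judge under two
# truth/lie mixes. Per-class accuracies are the study's reported values;
# the 95%-honest "ecological" mix is an assumed illustration.
acc_on_lies, acc_on_truths = 0.858, 0.195

for p_truth in (0.50, 0.95):  # balanced lab mix vs. truth-heavy everyday mix
    overall = p_truth * acc_on_truths + (1 - p_truth) * acc_on_lies
    print(f"{p_truth:.0%} honest -> overall accuracy ~{overall:.1%}")
# ~52.7% under the balanced mix, but only ~22.8% under the truth-heavy mix
```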
While the AI systems demonstrated sensitivity to contextual factors, this awareness did not translate into better overall performance. "AI turned out to be sensitive to context -- but that didn't make it better at spotting lies," Markowitz noted. The AI's best performance reached 57.7% accuracy when detecting truths and lies involving feelings about friends, but overall results remained inconsistent [3].

The research suggests that while AI can mimic certain aspects of human judgment, it lacks the emotional and contextual depth required for reliable deception detection. This limitation appears to stem from fundamental differences in how AI processes information compared to human cognitive and emotional responses [1].

The findings carry significant implications for industries considering AI-powered deception detection tools. Despite the appeal of what might seem like an objective, high-tech solution, the research indicates that current AI technology is not ready for reliable professional use in this domain [2].

"It's easy to see why people might want to use AI to spot lies -- it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we're not there yet," Markowitz cautioned. The study emphasizes that both researchers and professionals need to achieve major improvements before AI can handle deception detection reliably [3].