4 Sources
[1]
AI Can Spot Lies, But Not as Well as Humans Can - Neuroscience News
Summary: A large-scale study tested whether AI personas can detect when humans are lying -- and found that while AI can sometimes spot deception, it is still far from trustworthy. Across 12 experiments involving 19,000 AI participants, the systems performed inconsistently, showing a strong bias toward identifying lies rather than truths. In some cases, AI matched human accuracy, but in others it failed to distinguish honest statements. The findings suggest that while AI can mimic human judgment, it lacks the emotional and contextual depth required to make reliable decisions about honesty.

Can an AI persona detect when a human is lying -- and should we trust it if it can? Artificial intelligence, or AI, has had many recent advances and continues to evolve in scope and capability. A new Michigan State University-led study is diving deeper into how well AI can understand humans by using it to detect human deception.

In the study, published in the Journal of Communication, researchers from MSU and the University of Oklahoma conducted 12 experiments with over 19,000 AI participants to examine how well AI personas were able to detect deception and truth from human subjects.

"This research aims to understand how well AI can aid in deception detection and simulate human data in social scientific research, as well as caution professionals when using large language models for lie detection," said David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study.

To evaluate AI in comparison to human deception detection, the researchers drew on Truth-Default Theory, or TDT. TDT suggests that people are mostly honest most of the time and that we are inclined to believe others are telling us the truth. This theory helped the researchers compare how AI acts to how people act in the same kinds of situations.
"Humans have a natural truth bias -- we generally assume others are being honest, regardless of whether they actually are," Markowitz said. "This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships."

To analyze the judgment of AI personas, the researchers used the Viewpoints AI research platform to assign audiovisual or audio-only media of humans for AI to judge. The AI judges were asked to determine whether the human subject was lying or telling the truth and to provide a rationale.

Different variables were evaluated, such as media type (audiovisual or audio-only), contextual background (information or circumstances that help explain why something happens), lie-truth base-rates (proportions of honest and deceptive communication), and the persona of the AI (identities created to act and talk like real people), to see how AI's detection accuracy was affected.

For example, one of the studies found that AI was lie-biased: it was much more accurate for lies (85.8%) than for truths (19.5%). In short interrogation settings, AI's deception accuracy was comparable to humans'. However, in a non-interrogation setting (e.g., when evaluating statements about friends), AI displayed a truth bias, aligning more closely with human performance. Generally, the results found that AI is more lie-biased and much less accurate than humans.

"Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context -- but that didn't make it better at spotting lies," said Markowitz.

The final findings suggest that AI's results do not match human results or accuracy, and that humanness might be an important limit, or boundary condition, for how deception detection theories apply.
The study highlights that using AI for detection may seem unbiased, but the industry needs to make significant progress before generative AI can be used for deception detection.

"It's easy to see why people might want to use AI to spot lies -- it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we're not there yet," said Markowitz. "Both researchers and professionals need to make major improvements before AI can truly handle deception detection."

The (in)efficacy of AI personas in deception detection experiments

Artificial intelligence (AI) has recently been used to aid in deception detection and to simulate human data in social scientific research. Thus, it is important to consider how well these tools can inform both enterprises. We report 12 studies, accessed through the Viewpoints.ai research platform, where AI (gemini-1.5-flash) made veracity judgments of humans. We systematically varied the nature and duration of the communication, modality, truth-lie base rate, and AI persona. AI performed best (57.7%) when detecting truths and lies involving feelings about friends, although it was notably truth-biased (71.7%). However, in assessing cheating interrogations, AI was lie-biased, judging more than three-quarters of interviewees as cheating liars. In assessing interviews where humans perform at rates over 70%, accuracy plummeted to 15.9% with an ecological base-rate. AI yielded results different from prior human studies and, therefore, we caution against using certain large language models for lie detection.
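The interplay between per-class accuracy and base rate described in the abstract can be illustrated with a quick back-of-the-envelope calculation. The per-class figures (19.5% on truths, 85.8% on lies) are taken from the article; the base rates used below are illustrative assumptions, not values from the study:

```python
# Sketch: expected overall accuracy of a lie-biased judge as a
# weighted average of its per-class accuracies, weighted by how
# often truths vs. lies actually occur.

def overall_accuracy(p_truth: float, truth_acc: float, lie_acc: float) -> float:
    """Expected overall accuracy given the proportion of true statements."""
    return p_truth * truth_acc + (1 - p_truth) * lie_acc

TRUTH_ACC, LIE_ACC = 0.195, 0.858  # per-class accuracies reported in the article

# Balanced 50/50 base rate: strong lie accuracy masks the weakness on truths.
print(overall_accuracy(0.5, TRUTH_ACC, LIE_ACC))  # ~0.53

# An ecological base rate where people are mostly honest (assume 90% truths):
# overall accuracy collapses, because most judgments are wrong "lie" calls.
print(overall_accuracy(0.9, TRUTH_ACC, LIE_ACC))  # ~0.26
```

This is why a judge that looks impressive against a balanced lab sample can perform far worse than chance once the honest-majority conditions of everyday communication are restored, consistent with the sharp accuracy drop the authors report under an ecological base-rate.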
[2]
MSU study dives deeper into how well AI can detect human deception
Michigan State University | Nov 4, 2025

Can an AI persona detect when a human is lying - and should we trust it if it can? Artificial intelligence, or AI, has had many recent advances and continues to evolve in scope and capability. A new Michigan State University-led study is diving deeper into how well AI can understand humans by using it to detect human deception.

In the study, published in the Journal of Communication, researchers from MSU and the University of Oklahoma conducted 12 experiments with over 19,000 AI participants to examine how well AI personas were able to detect deception and truth from human subjects.

"This research aims to understand how well AI can aid in deception detection and simulate human data in social scientific research, as well as caution professionals when using large language models for lie detection," said David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study.

To evaluate AI in comparison to human deception detection, the researchers drew on Truth-Default Theory, or TDT. TDT suggests that people are mostly honest most of the time and that we are inclined to believe others are telling us the truth. This theory helped the researchers compare how AI acts to how people act in the same kinds of situations.

"Humans have a natural truth bias - we generally assume others are being honest, regardless of whether they actually are," Markowitz said. "This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships."

To analyze the judgment of AI personas, the researchers used the Viewpoints AI research platform to assign audiovisual or audio-only media of humans for AI to judge. The AI judges were asked to determine whether the human subject was lying or telling the truth and to provide a rationale.
Different variables were evaluated, such as media type (audiovisual or audio-only), contextual background (information or circumstances that help explain why something happens), lie-truth base-rates (proportions of honest and deceptive communication), and the persona of the AI (identities created to act and talk like real people), to see how AI's detection accuracy was affected.

For example, one of the studies found that AI was lie-biased: it was much more accurate for lies (85.8%) than for truths (19.5%). In short interrogation settings, AI's deception accuracy was comparable to humans'. However, in a non-interrogation setting (e.g., when evaluating statements about friends), AI displayed a truth bias, aligning more closely with human performance. Generally, the results found that AI is more lie-biased and much less accurate than humans.

"Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context - but that didn't make it better at spotting lies," said Markowitz.

The final findings suggest that AI's results do not match human results or accuracy, and that humanness might be an important limit, or boundary condition, for how deception detection theories apply.

The study highlights that using AI for detection may seem unbiased, but the industry needs to make significant progress before generative AI can be used for deception detection.

"It's easy to see why people might want to use AI to spot lies - it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we're not there yet," said Markowitz. "Both researchers and professionals need to make major improvements before AI can truly handle deception detection."

Michigan State University

Journal reference: Markowitz, D. M., & Levine, T. R. (2025). The (in)efficacy of AI personas in deception detection experiments.
Journal of Communication. doi.org/10.1093/joc/jqaf034
[3]
Can AI tell when you're lying?
A new study is diving deeper into how well artificial intelligence can understand humans by using it to detect human deception.

In the study in the Journal of Communication, researchers from Michigan State University and the University of Oklahoma conducted 12 experiments with over 19,000 artificial intelligence (AI) participants to examine how well AI personas were able to detect deception and truth from human subjects.

"This research aims to understand how well AI can aid in deception detection and simulate human data in social scientific research, as well as caution professionals when using large language models for lie detection," says David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study.

To evaluate AI in comparison to human deception detection, the researchers pulled from Truth-Default Theory, or TDT. TDT suggests that people are mostly honest most of the time and we are inclined to believe that others are telling us the truth. This theory helped the researchers compare how AI acts to how people act in the same kinds of situations.

"Humans have a natural truth bias -- we generally assume others are being honest, regardless of whether they actually are," Markowitz says. "This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships."

To analyze the judgment of AI personas, the researchers used the Viewpoints AI research platform to assign audiovisual or audio-only media of humans for AI to judge. The AI judges were asked to determine if the human subject was lying or telling the truth and provide a rationale.
Different variables were evaluated, such as media type (audiovisual or audio-only), contextual background (information or circumstances that help explain why something happens), lie-truth base-rates (proportions of honest and deceptive communication), and the persona of the AI (identities created to act and talk like real people) to see how AI's detection accuracy was impacted.

For example, one of the studies found that AI was lie-biased, as AI was much more accurate for lies (85.8%) compared to truths (19.5%). In short interrogation settings, AI's deception accuracy was comparable to humans. However, in a non-interrogation setting (e.g., when evaluating statements about friends), AI displayed a truth-bias, aligning more accurately to human performance. Generally, the results found that AI is more lie-biased and much less accurate than humans.

"Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context -- but that didn't make it better at spotting lies," says Markowitz.

The final findings suggest that AI's results do not match human results or accuracy and that humanness might be an important limit, or boundary condition, for how deception detection theories apply.

The study highlights that using AI for detection may seem unbiased, but the industry needs to make significant progress before generative AI can be used for deception detection.

"It's easy to see why people might want to use AI to spot lies -- it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we're not there yet," says Markowitz. "Both researchers and professionals need to make major improvements before AI can truly handle deception detection."
[4]
How AI personas could be used to detect human deception
Can an AI persona detect when a human is lying -- and should we trust it if it can? Artificial intelligence, or AI, has had many recent advances and continues to evolve in scope and capability. A new Michigan State University-led study is diving deeper into how well AI can understand humans by using it to detect human deception.

In the study, published in the Journal of Communication, researchers from MSU and the University of Oklahoma conducted 12 experiments with over 19,000 AI participants to examine how well AI personas were able to detect deception and truth from human subjects.

"This research aims to understand how well AI can aid in deception detection and simulate human data in social scientific research, as well as caution professionals when using large language models for lie detection," said David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study.

To evaluate AI in comparison to human deception detection, the researchers pulled from Truth-Default Theory (TDT). TDT suggests that people are mostly honest most of the time and we are inclined to believe that others are telling us the truth. This theory helped the researchers compare how AI acts to how people act in the same kinds of situations.

"Humans have a natural truth bias -- we generally assume others are being honest, regardless of whether they actually are," Markowitz said. "This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships."

To analyze the judgment of AI personas, the researchers used the Viewpoints AI research platform to assign audiovisual or audio-only media of humans for AI to judge. The AI judges were asked to determine if the human subject was lying or telling the truth and provide a rationale.
Different variables were evaluated, such as media type (audiovisual or audio-only), contextual background (information or circumstances that help explain why something happens), lie-truth base-rates (proportions of honest and deceptive communication), and the persona of the AI (identities created to act and talk like real people) to see how AI's detection accuracy was impacted.

For example, one of the studies found that AI was lie-biased, as AI was much more accurate for lies (85.8%) compared to truths (19.5%). In short interrogation settings, AI's deception accuracy was comparable to that of humans. However, in a non-interrogation setting (e.g., when evaluating statements about friends), AI displayed a truth-bias, aligning more accurately to human performance. Generally, the results found that AI is more lie-biased and much less accurate than humans.

"Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context -- but that didn't make it better at spotting lies," said Markowitz.

The final findings suggest that AI's results do not match human results or accuracy and that humanness might be an important limit, or boundary condition, for how deception detection theories apply.

The study highlights that using AI for detection may seem unbiased, but the industry needs to make significant progress before generative AI can be used for deception detection.

"It's easy to see why people might want to use AI to spot lies -- it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we're not there yet," said Markowitz. "Both researchers and professionals need to make major improvements before AI can truly handle deception detection."
A comprehensive Michigan State University study reveals that while AI can sometimes detect human deception, it performs inconsistently and shows significant bias, falling short of human accuracy in most scenarios.
A comprehensive study led by Michigan State University has shed new light on artificial intelligence's ability to detect human deception, revealing significant limitations that challenge the technology's readiness for real-world applications. The research, published in the Journal of Communication, represents one of the most extensive examinations of AI's lie detection capabilities to date [1].

Researchers from MSU and the University of Oklahoma conducted 12 separate experiments involving over 19,000 AI participants, systematically testing how well AI personas could distinguish between truthful and deceptive human statements. The study's scope and methodology provide unprecedented insights into the current state of AI-powered deception detection technology [2].

The research team grounded their investigation in Truth-Default Theory (TDT), which suggests that humans are generally honest most of the time and naturally inclined to believe others are telling the truth. This theoretical framework allowed researchers to compare AI behavior with established human behavioral patterns in deception detection scenarios [3].

"Humans have a natural truth bias -- we generally assume others are being honest, regardless of whether they actually are," explained David Markowitz, associate professor of communication at MSU and the study's lead author. "This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships" [4].

Using the Viewpoints AI research platform, researchers presented AI systems with both audiovisual and audio-only media of human subjects. The AI judges were tasked with determining whether humans were lying or telling the truth while providing rationales for their decisions. The study systematically varied multiple factors, including media type, contextual background, lie-truth base rates, and AI personas, to assess how these variables affected detection accuracy [1].

The experiments revealed striking inconsistencies in AI performance across different contexts. In interrogation-style settings, AI demonstrated a pronounced lie bias, achieving 85.8% accuracy when identifying lies but only 19.5% accuracy when recognizing truths. However, in non-interrogation contexts, such as evaluating statements about friends, AI displayed a truth bias that more closely aligned with human performance patterns [2].

Despite occasional instances where AI matched human performance, the overall results demonstrated that artificial intelligence systems are significantly less accurate than humans at detecting deception. The study found that AI's context sensitivity, while notable, did not translate into superior lie detection capabilities. In fact, the technology's tendency toward lie bias often hindered rather than helped its performance [3].

"Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments," Markowitz noted. "In this study, and with the model we used, AI turned out to be sensitive to context -- but that didn't make it better at spotting lies" [4].

The findings carry significant implications for professionals considering AI-powered deception detection tools. While such technology might appear to offer an objective, high-tech solution to lie detection challenges, the research suggests that current AI systems lack the emotional and contextual depth required for reliable deception detection [1].

The study highlights a critical gap between perception and reality regarding AI capabilities. "It's easy to see why people might want to use AI to spot lies -- it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we're not there yet," Markowitz cautioned. "Both researchers and professionals need to make major improvements before AI can truly handle deception detection" [2].