© 2025 TheOutpost.AI All rights reserved
On Wed, 16 Apr, 4:05 PM UTC
[1]
In science communication, realistic avatars may foster more trustworthiness than cartoon-like ones
On TikTok, exceptional "testimonials" such as Nikola Tesla or Marie Curie deliver short science-related messages that have garnered millions of views. This is just one of many examples where AI-generated avatars are used to communicate science -- a strategy that may also have drawbacks.

The generation of images and animations through artificial intelligence is a rapidly growing field, constantly improving in quality. Yet many avatars, though realistic, still show minor flaws -- glitches, delays, inconsistent facial expressions or lip-syncing -- sometimes barely noticeable, but still easily picked up by a human observer.

Jasmin Baake, a researcher at the Center for Advanced Internet Studies (CAIS) in Bochum, Germany, and the other authors of a study in the Journal of Science Communication realized that these avatars could trigger a phenomenon known in cognitive science as the "uncanny valley." The uncanny valley describes a human reaction to humanoid avatars (digital or robotic): when they look hyper-realistic but not quite perfect, they may evoke strong discomfort, while more stylized or cartoonish humanoid figures tend not to.

Because the uncanny valley can provoke outright rejection in viewers, Baake and colleagues wondered to what extent the human-like characteristics of AI avatars representing science communicators influence the trustworthiness viewers attribute to them. "We wanted to do research on the perception of these avatars and especially on how their degree of realism and their gender might impact the trustworthiness perception of the recipient," explains Baake.

The study (conducted in Germany, in German) involved a series of videos featuring AI-generated avatars portraying science communicators, both male and female. There were four experimental conditions, varying in avatar realism (very high vs. cartoonish style) and gender (male or female).
The nearly 500 participants were recruited through a representative online sample in Germany, selected to reflect the population in terms of age, gender, and education.

"For the degree of realism, we assumed that, with the uncanny valley hypothesis in mind, the more stylistic avatars, so the ones that looked a bit more cartoonish, would be perceived as more trustworthy," says Baake. "And, based on existing literature on gendered perceptions of science communicators -- which show that male scientists are often associated with greater competence -- combined with concerns that AI-generated avatars may reflect and reinforce such stereotypes due to biased training data, we hypothesized that male avatars would be perceived as more competent, and therefore more trustworthy overall, than female avatars."

The results surprised Baake and colleagues: the realistic avatars were rated more positively than the cartoon-style ones. Questionnaires given to participants after viewing the videos assessed the avatars' perceived competence, integrity, and benevolence, which together reflect perceived trustworthiness. The more realistic avatars scored slightly higher across all three dimensions.

As for gender, the effect was partial: male avatars were perceived as more competent, but no significant differences were found in terms of integrity or benevolence. "With our findings, we could not find a descent into the uncanny valley with a higher degree of realism, at least in our conditions," comments Baake.

Additionally, individual factors -- such as viewers' prior AI knowledge and trust in science -- moderated trustworthiness perceptions. According to the study's findings, more realistic, human-like avatars appear to be suitable for communicating scientific content.
However, Baake emphasizes that while no uncanny valley effect was found here, future studies should test a broader range of realism levels to investigate whether an intermediate uncanny valley effect might emerge between the two conditions tested so far, and whether perceptions of avatar realism differ across observers.
[2]
Realistic AI Avatars Boost Trust in Science Communication - Neuroscience News
Summary: AI-generated avatars are increasingly used to deliver science content on platforms like TikTok, raising questions about how their appearance affects viewer trust. A new study tested whether avatar realism and gender influenced perceived trustworthiness among nearly 500 participants. Surprisingly, more realistic avatars were rated as more competent, benevolent, and trustworthy than cartoon-like ones, defying the expected "uncanny valley" effect. While male avatars were seen as slightly more competent, overall trustworthiness was not significantly affected by gender.
A new study reveals that realistic AI-generated avatars are perceived as more trustworthy than cartoon-like ones in science communication, challenging the "uncanny valley" hypothesis and shedding light on the impact of avatar gender on perceived competence.
A groundbreaking study conducted by researchers at the Center for Advanced Internet Studies (CAIS) in Bochum, Germany, has revealed surprising insights into the use of AI-generated avatars for science communication. The research, published in the Journal of Science Communication, challenges previous assumptions about the "uncanny valley" effect and sheds light on how avatar realism and gender influence perceived trustworthiness [1].
The study, led by Jasmin Baake, involved nearly 500 participants from a representative online sample in Germany. Participants were shown a series of videos featuring AI-generated avatars portraying science communicators, with four experimental conditions varying in avatar realism (highly realistic vs. cartoonish) and gender (male or female) [2].
Contrary to the researchers' initial hypothesis based on the uncanny valley theory, the study found that more realistic avatars were rated more positively than cartoon-style ones. Participants assessed the avatars' perceived competence, integrity, and benevolence – all components of trustworthiness [1].
Key findings include:

- Realistic avatars scored slightly higher than cartoonish ones on perceived competence, integrity, and benevolence.
- Male avatars were rated as more competent, but no significant gender differences emerged for integrity or benevolence.
- Individual factors, such as viewers' prior AI knowledge and trust in science, moderated trustworthiness perceptions.
The study's results suggest that more realistic, human-like AI avatars may be more suitable for communicating scientific content. This finding has significant implications for the growing use of AI-generated avatars in science communication, particularly on platforms like TikTok where such content has garnered millions of views [1].
The researchers noted that individual factors, such as viewers' prior AI knowledge and trust in science, moderated trustworthiness perceptions. Baake emphasized the need for future studies to explore a broader range of realism levels and investigate potential intermediate uncanny valley effects [2].
While the study found that male avatars were perceived as more competent, it's important to note that this reflects existing stereotypes in science communication. The researchers highlight the need to address potential biases in AI-generated avatars, which may reinforce such stereotypes due to biased training data [1].
As AI-generated content continues to evolve and improve in quality, this research provides valuable insights for science communicators, content creators, and AI developers. The findings suggest that investing in highly realistic AI avatars may enhance the effectiveness of science communication, while also highlighting the ongoing need to address gender biases in both human and AI-mediated scientific discourse.