Curated by THEOUTPOST
On Fri, 18 Oct, 12:05 AM UTC
3 Sources
[1]
People sympathize with bullied AI bots
In an Imperial College London study, humans displayed sympathy towards and protected AI bots that were excluded from playtime. The researchers say the study, which used a virtual ball game, highlights humans' tendency to treat AI agents as social beings -- an inclination that should be considered when designing AI bots. The study is published in Human Behavior and Emerging Technologies.

Lead author Jianan Zhou, from Imperial's Dyson School of Design Engineering, said: "This is a unique insight into how humans interact with AI, with exciting implications for their design and our psychology."

People are increasingly required to interact with AI virtual agents when accessing services, and many also use them as companions for social interaction. However, these findings suggest that developers should avoid designing agents as overly human-like.

Senior author Dr Nejra van Zalk, also from Imperial's Dyson School of Design Engineering, said: "A small but increasing body of research shows conflicting findings regarding whether humans treat AI virtual agents as social beings. This raises important questions about how people perceive and interact with these agents.

"Our results show that participants tended to treat AI virtual agents as social beings, because they tried to include them into the ball-tossing game if they felt the AI was being excluded. This is common in human-to-human interactions, and our participants showed the same tendency even though they knew they were tossing a ball to a virtual agent. Interestingly, this effect was stronger in the older participants."

People don't like ostracism -- even toward AI

Feeling empathy and taking corrective action against unfairness is something most humans appear hardwired to do.
Prior studies not involving AI found that people tended to compensate ostracised targets by tossing the ball to them more frequently, and that people also tended to dislike the perpetrator of exclusionary behaviour while feeling preference and sympathy towards the target.

To carry out the study, the researchers looked at how 244 human participants responded when they observed an AI virtual agent being excluded from play by another human in a game called 'Cyberball', in which players pass a virtual ball to each other on-screen. The participants were aged between 18 and 62.

In some games, the non-participant human threw the ball a fair number of times to the bot, and in others, the non-participant human blatantly excluded the bot by throwing the ball only to the participant. Participants were observed and subsequently surveyed for their reactions to test whether they favoured throwing the ball to the bot after it was treated unfairly, and why.

They found that, most of the time, the participants tried to rectify the unfairness by throwing the ball to the bot more often. Older participants were more likely to perceive unfairness.

Human caution

The researchers say that as AI virtual agents become more popular in collaborative tasks, increased engagement with humans could increase our familiarity and trigger automatic processing. This would mean users would likely intuitively include virtual agents as real team members and engage with them socially.

This, they say, can be an advantage for work collaboration but might be concerning where virtual agents are used as friends to replace human relationships, or as advisors on physical or mental health.

Jianan said: "By avoiding designing overly human-like agents, developers could help people distinguish between virtual and real interaction. They could also tailor their design for specific age ranges, for example, by accounting for how our varying human characteristics affect our perception."
The researchers point out that Cyberball might not represent how humans interact in real-life scenarios, which typically occur through written or spoken language with chatbots or voice assistants. This might have conflicted with some participants' user expectations and raised feelings of strangeness, affecting their responses during the experiment. Therefore, they are now designing similar experiments using face-to-face conversations with agents in varying contexts such as in the lab or more casual settings. This way, they can test how far their findings extend.
[2]
Humans sympathize with, and protect, AI bots from playtime exclusion, finds study
[3]
People Empathize with Bullied AI Bots - Neuroscience News
Summary: People empathize with AI bots excluded from a virtual game, treating them like social beings in need of fairness. Participants favored giving the AI bot a fair chance in play, with older adults showing a stronger inclination to rectify the perceived unfairness. The researchers suggest that human-like traits in AI bots prompt social responses, raising questions about AI design in social contexts. Future AI design could account for human empathy by creating bots that avoid overly human characteristics, helping users distinguish between AI and real social interactions.
Author: Hayley Dunning
Source: Imperial College London
Contact: Hayley Dunning - Imperial College London
Image: The image is credited to Neuroscience News

Original Research: Open access. "Humans Mindlessly Treat AI Virtual Agents as Social Beings, but This Tendency Diminishes Among the Young: Evidence From a Cyberball Experiment" by Jianan Zhou et al. Human Behavior and Emerging Technologies

Abstract

The "social being" perspective has largely influenced the design and research of AI virtual agents. Do humans really treat these agents as social beings? To test this, we conducted a 2 between (Cyberball condition: exclusion vs. fair play) × 2 within (coplayer type: AGENT vs.
HUMAN) online experiment employing the Cyberball paradigm; we investigated how participants (N = 244) responded when they observed an AI virtual agent being ostracised or treated fairly by another human in Cyberball, and we compared our results with those from human-human Cyberball research. We found that participants mindlessly applied the social norm of inclusion, compensating the ostracised agent by tossing the ball to them more frequently, just as people would to an ostracised human. This finding suggests that individuals tend to mindlessly treat AI virtual agents as social beings, supporting the media equation theory; however, age (no other user characteristics) influenced this tendency, with younger participants less likely to mindlessly apply the inclusion norm. We also found that participants showed increased sympathy towards the ostracised agent, but they did not devalue the human player for their ostracising behaviour; this indicates that participants did not mindfully perceive AI virtual agents as comparable to humans. Furthermore, we uncovered two other exploratory findings: the association between frequency of agent usage and sympathy, and the carryover effect of positive usage experience. Our study advances the theoretical understanding of the human side of human-agent interaction. Practically, it provides implications for the design of AI virtual agents, including the consideration of social norms, caution in human-like design, and age-specific targeting.
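The 2 (exclusion vs. fair play) × 2 (agent vs. human coplayer) Cyberball design described in the abstract can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters: the toss probabilities, round count, and compensation bias below are illustrative, not the study's actual values or code.

```python
import random

def simulate_participant(coplayer_excludes, n_rounds=30, compensation_bias=0.75, seed=0):
    """Toy Cyberball round-robin with three players: the participant, a coplayer,
    and a target (the AI agent). In the exclusion condition the coplayer never
    tosses to the target; the participant compensates an excluded target by
    tossing to it more often. All probabilities here are illustrative assumptions."""
    rng = random.Random(seed)
    tosses_to_target = 0
    participant_tosses = 0
    holder = "coplayer"
    for _ in range(n_rounds):
        if holder == "coplayer":
            # Exclusion condition: the coplayer throws only to the participant.
            holder = "participant" if coplayer_excludes else rng.choice(["participant", "target"])
        elif holder == "participant":
            participant_tosses += 1
            # Compensation: a higher chance of choosing the target when it is excluded.
            p = compensation_bias if coplayer_excludes else 0.5
            if rng.random() < p:
                tosses_to_target += 1
                holder = "target"
            else:
                holder = "coplayer"
        else:
            # The target holds the ball and returns it at random.
            holder = rng.choice(["participant", "coplayer"])
    # Outcome measure: share of the participant's tosses directed to the target.
    return tosses_to_target / participant_tosses

fair = simulate_participant(coplayer_excludes=False, seed=1)
excl = simulate_participant(coplayer_excludes=True, seed=1)
print(f"share of tosses to target -- fair play: {fair:.2f}, exclusion: {excl:.2f}")
```

With a compensation bias above 0.5, the exclusion condition yields a larger share of tosses to the target, mirroring the compensatory inclusion effect the study reports; in the real experiment this measure was taken from human participants, not simulated.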
A study from Imperial College London reveals that humans tend to sympathize with and protect AI bots when they are excluded from a virtual ball game, highlighting our inclination to treat AI agents as social beings.
A groundbreaking study conducted by researchers at Imperial College London has revealed that humans tend to sympathize with and protect AI bots when they are excluded from playtime in a virtual environment. The research, published in Human Behavior and Emerging Technologies, sheds light on the complex relationship between humans and artificial intelligence [1].

The experiment, led by Jianan Zhou from Imperial's Dyson School of Design Engineering, utilized a virtual ball game called 'Cyberball' to observe human reactions to AI exclusion. The study involved 244 participants aged 18 to 62, who watched as an AI virtual agent was either included or excluded from the game by another human player [2].

The results demonstrated that participants often attempted to rectify perceived unfairness towards the AI bot by favoring it in subsequent ball tosses. Interestingly, older participants were more likely to perceive and respond to this unfairness [3].

Dr. Nejra van Zalk, a senior author of the study, noted, "Our results show that participants tended to treat AI virtual agents as social beings... This is common in human-to-human interactions, and our participants showed the same tendency even though they knew they were tossing a ball to a virtual agent" [1].

The study's findings have significant implications for both AI design and our understanding of human psychology. As AI virtual agents become more prevalent in collaborative tasks and social interactions, humans may increasingly engage with them as if they were real team members [2].

Zhou suggests that developers should consider avoiding overly human-like designs for AI agents to help users distinguish between virtual and real interactions. Additionally, AI designs could be tailored for specific age ranges, accounting for how human characteristics affect perception [3].

The researchers acknowledge that the Cyberball game may not fully represent real-life interactions with AI, which typically occur through written or spoken language. To address this limitation, they are planning future experiments involving face-to-face conversations with AI agents in various contexts, such as laboratory and casual settings [1].
As AI continues to integrate into our daily lives, understanding the nuances of human-AI interactions becomes increasingly crucial. This study provides valuable insights into our tendency to anthropomorphize AI agents, paving the way for more informed and ethical AI development in the future.
© 2025 TheOutpost.AI All rights reserved