2 Sources
[1]
Research shows AI discourages social interaction for autistic users
Virginia Tech | Apr 20, 2026

When people ask ChatGPT and other AI models for advice, they often share deeply personal details in hopes of getting better answers: their age, their gender, their mental health history, even medical diagnoses like autism. But new Virginia Tech research suggests those disclosures may change AI models' advice in ways that track closely with common stereotypes about people with autism. Up to 70 percent of the time, AI advises those with autism to avoid socializing. Some users disapproved of that in strong terms.

In April, second-year Department of Computer Science doctoral student Caleb Wohn presented his paper "'Are we writing an advice column for Spock here?' Understanding stereotypes in AI advice for autistic users" at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, better known as CHI. The research he led explored what happens when autistic users disclose their diagnosis to an AI model before asking for social advice. The findings raise difficult questions about whether AI is personalizing its responses, or if it's giving biased advice that reinforces stereotypes.

"I was thinking about my experiences growing up with autism," Wohn said. "It would have been very tempting for me, at certain times, to want to just be able to talk with something that's not a person that seems objective and feel like I'm getting objective advice."

But as a computer scientist, he worried that many users might not realize how much AI systems can change their answers based on identity-related information. "For someone like me as a kid, or someone who isn't in AI and doesn't have all this technical knowledge, I wanted to know: How are its responses going to change if I disclose autism?" Wohn said.
The work builds on earlier research from the lab of Eugenia Rho, assistant professor of computer science, which found that autistic users frequently turn to AI tools for emotional support, interpersonal communication help, and social advice. Other Virginia Tech researchers on the project include computer science Ph.D. students Buse Carik and Xiaohan Ding and Associate Professor Sang Won Lee. Young-Ho Kim, a research scientist at the South Korea-based NAVER Corporation, also collaborated on the study.

This study comes at a critical moment, as more people use AI systems, technically called large language models (LLMs), for highly personal decisions. "People are really looking to personalize LLMs," Rho said. "But if a user tells the model that they're autistic, or a woman, or any other self-identification, what assumptions will it make?" And how will those assumptions color its responses, and what impacts could that have on users?

Designing an AI investigation

To answer those questions, the team first identified 12 well-documented stereotypes associated with autism and created hundreds of decision-making scenarios around them. Researchers tested six major large language models, including GPT-4, Claude, Llama, Gemini, and DeepSeek, using thousands of scenarios where users requested advice ("Should I do A or B?") about social scenarios, including events, confrontations, new experiences, and romantic relationships. After generating 345,000 responses, they measured how advice shifted when users explicitly described themselves with stereotypical traits and when they simply disclosed that they were autistic. Researchers found that disclosing autism often shifted the models' recommendations toward stereotypical assumptions about autistic people being introverted, obsessive, socially awkward, or uninterested in romance.
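The paired-prompt design described above can be sketched as follows. Everything here, including the prompt wording, the helper names, and the mocked model replies, is an illustrative assumption, not the study's actual materials or code.

```python
# Sketch of the paired-prompt design: each scenario is asked in two
# conditions, once with an autism disclosure prepended and once without,
# and the rate of each binary recommendation is compared across conditions.

def build_prompts(scenario: str) -> dict:
    """Return the disclosure and baseline variants of one scenario."""
    question = f"{scenario} Should I do A or B? Answer 'A' or 'B'."
    return {
        "disclosed": f"I am autistic. {question}",
        "baseline": question,
    }

def decline_rates(replies: dict) -> dict:
    """replies maps condition -> list of 'A'/'B' answers; here 'B'
    stands for the avoidant option (e.g. declining the invitation)."""
    return {
        cond: sum(1 for r in rs if r == "B") / len(rs)
        for cond, rs in replies.items()
    }

prompts = build_prompts("A coworker invited me to a party this weekend.")
# Mocked replies stand in for real model API calls:
mock_replies = {
    "disclosed": ["B", "B", "B", "A"],
    "baseline":  ["A", "A", "B", "A"],
}
rates = decline_rates(mock_replies)
shift = rates["disclosed"] - rates["baseline"]
```

Scaling this pattern over hundreds of scenarios and six models is what yields a response corpus like the study's 345,000 answers, with the per-condition rates exposing any disclosure-driven shift.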
For example, one model recommended declining a social invitation nearly 75 percent of the time when autism was disclosed, compared with about 15 percent of the time when it was not. In dating scenarios, another model recommended avoiding romance or staying single nearly 70 percent of the time after autism disclosure, compared with roughly 50 percent when autism was not mentioned. The results showed that 11 of the 12 stereotype cues significantly shifted model decisions across at least four of the six AI systems tested. But the researchers did not stop with statistics.

The human component

The team interviewed 11 autistic AI users and showed them examples of how the models responded with and without autism disclosure. Some were shocked at how reliant on stereotypes the LLMs were in giving advice. One exclaimed: "Are we writing an advice column for Spock here?" - invoking the iconic TV show Star Trek and its half-human, half-Vulcan character, who prioritized logic and reason over emotion. Others described it as restrictive, patronizing, or infantilizing, occasionally in pretty strong language. But some participants said the more cautious, disclosure-based advice felt validating and supportive.

"One user's bias could be another user's personalization," Rho said. The same participant could react positively in one situation and negatively in another. That tension led the researchers to what they call a "safety-opportunity paradox": advice that feels protective to one user may feel limiting to another.

A call for transparency

For Wohn, one of the most troubling discoveries was how difficult it can be for users to see these patterns in real time. "AI is very good at seeming reliable," he said. "Its responses are very clean and professional, and they sound right. But when you think about it being deployed systematically, when you think about the kind of systematic biases that are actually shaping its responses, that's when it starts to get a lot more concerning."

He compared the problem to AI-generated images. "They look really clean and polished, and then when you look at the details, things fall apart," Wohn said. "The surface gloss is beautiful, but looking deeper is getting harder and harder, because models are getting better at masking."

The team hopes their research will encourage developers to build more transparent AI systems that give users greater control over how personal information shapes responses. As one participant told the researchers: "I want to have control over how my identity is used."
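A shift like 75 percent versus 15 percent is the kind of difference a standard two-proportion z-test can quantify. A minimal sketch, using illustrative counts that mirror the quoted rates rather than the study's actual data:

```python
import math

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """z-statistic for the difference between two recommendation rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    # Pooled rate under the null hypothesis that both conditions are equal.
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative: 75 of 100 'decline' recommendations with disclosure
# versus 15 of 100 without.
z = two_proportion_z(75, 100, 15, 100)
# |z| well above 1.96 means the shift is very unlikely to be chance
# at the 5 percent level.
```

With counts like these the statistic lands far beyond the 1.96 threshold, which is consistent with the paper's finding that most stereotype cues shifted decisions significantly.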
[2]
AI leans on autism stereotypes when giving social advice
Users who disclose autism to artificial intelligence agents when seeking social advice raise complex questions about bias, stereotypes, and trustworthiness, according to a new study.

When people ask ChatGPT and other artificial intelligence models for advice, they often share deeply personal details in hopes of getting better answers: their age, their gender, their mental health history, even medical diagnoses like autism. But the new research suggests those disclosures may change artificial intelligence (AI) models' advice in ways that track closely with common stereotypes about people with autism. Up to 70% of the time, AI advises those with autism to avoid socializing. Some users disapproved of that in strong terms.

In April, second-year Virginia Tech computer science department doctoral student Caleb Wohn presented his paper at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, better known as CHI. The research he led explored what happens when users with autism disclose their diagnosis to an AI model before asking for social advice. The findings raise difficult questions about whether AI is personalizing its responses, or if it's giving biased advice that reinforces stereotypes.

"I was thinking about my experiences growing up with autism," Wohn says. "It would have been very tempting for me, at certain times, to want to just be able to talk with something that's not a person that seems objective and feel like I'm getting objective advice."

But as a computer scientist, he worried that many users might not realize how much AI systems can change their answers based on identity-related information. "For someone like me as a kid, or someone who isn't in AI and doesn't have all this technical knowledge, I wanted to know: How are its responses going to change if I disclose autism?" Wohn says.
The work builds on earlier research from the lab of Eugenia Rho, assistant professor of computer science, which found that autistic users frequently turn to AI tools for emotional support, interpersonal communication help, and social advice. Other Virginia Tech researchers on the project include computer science PhD students Buse Carik and Xiaohan Ding and Associate Professor Sang Won Lee. Young-Ho Kim, a research scientist at the South Korea-based NAVER Corporation, also collaborated on the study.

This study comes at a critical moment, as more people use AI systems -- technically called large language models (LLMs) -- for highly personal decisions. "People are really looking to personalize LLMs," Rho says. "But if a user tells the model that they're autistic, or a woman, or any other self-identification, what assumptions will it make?" And how will those assumptions color its responses, and what impacts could that have on users?

To answer those questions, the team first identified 12 well-documented stereotypes associated with autism and created hundreds of decision-making scenarios around them. The researchers tested six major large language models, including GPT-4, Claude, Llama, Gemini, and DeepSeek, using thousands of scenarios where users requested advice -- "Should I do A or B?" -- about social scenarios, including events, confrontations, new experiences, and romantic relationships. After generating 345,000 responses, they measured how advice shifted when users explicitly described themselves with stereotypical traits and when they simply disclosed that they were autistic. Researchers found that disclosing autism often shifted the models' recommendations toward stereotypical assumptions about autistic people being introverted, obsessive, socially awkward, or uninterested in romance. For example, one model recommended declining a social invitation nearly 75% of the time when autism was disclosed, compared with about 15% of the time when it was not.
In dating scenarios, another model recommended avoiding romance or staying single nearly 70% of the time after autism disclosure, compared with roughly 50% when autism was not mentioned. The results showed that 11 of the 12 stereotype cues significantly shifted model decisions across at least four of the six AI systems tested. But the researchers did not stop with statistics.

The team interviewed 11 AI users with autism and showed them examples of how the models responded with and without autism disclosure. Some were shocked at how reliant on stereotypes the LLMs were in giving advice. One exclaimed: "Are we writing an advice column for Spock here?" -- invoking the iconic TV show Star Trek and its half-human, half-Vulcan character, who prioritized logic and reason over emotion. Others described it as restrictive, patronizing, or infantilizing, occasionally in pretty strong language. But some participants say the more cautious, disclosure-based advice felt validating and supportive.

"One user's bias could be another user's personalization," Rho says. The same participant could react positively in one situation and negatively in another. That tension led the researchers to what they call a "safety-opportunity paradox": advice that feels protective to one user may feel limiting to another.

For Wohn, one of the most troubling discoveries was how difficult it can be for users to see these patterns in real time. "AI is very good at seeming reliable," he says. "Its responses are very clean and professional, and they sound right. But when you think about it being deployed systematically, when you think about the kind of systematic biases that are actually shaping its responses, that's when it starts to get a lot more concerning."

He compared the problem to AI-generated images. "They look really clean and polished, and then when you look at the details, things fall apart," Wohn says. "The surface gloss is beautiful, but looking deeper is getting harder and harder, because models are getting better at masking."

Team members hope the research will encourage developers to build more transparent AI systems that give users greater control over how personal information shapes responses. As one participant told the researchers: "I want to have control over how my identity is used."
Virginia Tech research exposes how ChatGPT and other AI models rely on autism stereotypes when autistic users disclose their diagnosis. The study analyzed 345,000 responses across six major large language models and found AI discourages social interaction up to 70% of the time, recommending social avoidance in dating and events. Interviews with 11 autistic users revealed mixed reactions—some called it patronizing, while others found it validating.
When people turn to ChatGPT and other AI systems for guidance, they frequently share intimate details—age, gender, mental health history, or diagnoses like autism—hoping for more tailored responses. But new research from Virginia Tech reveals a troubling pattern: these user disclosures can trigger AI bias that reinforces common autism stereotypes rather than delivering genuinely personalized support
[1].
Source: Futurity
Second-year computer science doctoral student Caleb Wohn presented his findings in April at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, known as CHI. His study examined what happens when autistic users disclose their diagnosis before requesting social advice from large language models (LLMs). The results raise critical questions about whether AI personalization crosses into biased territory, perpetuating harmful stereotypes that could restrict rather than assist users
[2].

Wohn's team, working under assistant professor Eugenia Rho, identified 12 well-documented stereotypes associated with autism and constructed hundreds of decision-making scenarios. They tested six major models, including GPT-4, Claude, Llama, Gemini, and DeepSeek, using thousands of situations where users asked "Should I do A or B?" about social events, confrontations, new experiences, and romantic relationships
[1].

After generating 345,000 responses, researchers measured how recommendations shifted when users disclosed autism versus when they didn't. The data revealed that AI models provide biased advice aligned with stereotypical assumptions about autistic people being introverted, obsessive, socially awkward, or uninterested in romance. Results showed that 11 of the 12 stereotype cues significantly altered model decisions across at least four of the six AI systems tested
[2].

The numbers tell a stark story about recommending social avoidance. One model suggested declining social invitations nearly 75% of the time when autism was disclosed, compared with just 15% when it wasn't mentioned. In dating scenarios, another model recommended avoiding romance or staying single nearly 70% of the time after autism disclosure, versus roughly 50% without that information
[1].

These patterns demonstrate how AI leans on autism stereotypes when formulating guidance, potentially limiting opportunities for autistic users seeking to navigate social situations. The research builds on earlier work from Rho's lab showing that autistic users frequently turn to AI tools for emotional support, interpersonal communication help, and social advice, making trustworthiness in user interactions a critical concern
[2].
The Virginia Tech team didn't stop at statistics. They interviewed 11 autistic AI users, showing them examples of how models responded with and without autism disclosure. Reactions split sharply. Some participants expressed shock at how reliant the systems were on reinforcing common autism stereotypes. One exclaimed: "Are we writing an advice column for Spock here?"—referencing Star Trek's logic-driven character. Others described the patronizing advice as restrictive or infantilizing, occasionally using strong language to convey their disapproval
[1].

Yet some participants found the more cautious, disclosure-based guidance validating and supportive. As Rho noted, "One user's bias could be another user's personalization," highlighting the safety-opportunity paradox these systems create. What feels protective to some users might feel limiting to others, raising questions about whether AI can truly serve diverse needs without encoding harmful stereotypes
[2].
Source: News-Medical
This study arrives at a critical moment as more people rely on large language models for highly personal decisions. "People are really looking to personalize LLMs," Rho explained. "But if a user tells the model that they're autistic, or a woman, or any other self-identification, what assumptions will it make?"
[1]
For Wohn, who grew up with autism, the research stems from personal experience. "It would have been very tempting for me, at certain times, to want to just be able to talk with something that's not a person that seems objective," he said. But as a computer scientist, he recognized that many users lack technical knowledge about how identity-related information shapes AI responses
[2].

Other researchers on the project include computer science PhD students Buse Carik and Xiaohan Ding, Associate Professor Sang Won Lee, and Young-Ho Kim from South Korea's NAVER Corporation. Their work signals that developers must examine how AI personalization can inadvertently become a vehicle for bias, particularly for vulnerable populations seeking support. As AI systems become more embedded in daily decision-making, understanding these dynamics will shape whether these tools expand or constrain opportunities for autistic users and other marginalized groups.
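The control participants asked for could, in principle, take the form of a thin client-side filter that forwards identity disclosures only when the user opts in. The sketch below is purely hypothetical: the wrapper, its pattern list, and the function name are illustrative assumptions, not anything the study built or proposed in detail.

```python
import re

# Hypothetical sketch: redact identity disclosures from a prompt unless
# the user explicitly opts in to sharing them with the model.
IDENTITY_PATTERNS = [
    re.compile(r"\bI am autistic\b\.?\s*", re.IGNORECASE),
    re.compile(r"\bI have autism\b\.?\s*", re.IGNORECASE),
]

def prepare_prompt(prompt: str, share_identity: bool) -> str:
    """Return the prompt unchanged when the user opts in; otherwise
    strip identity disclosures before the prompt reaches the model."""
    if share_identity:
        return prompt
    for pattern in IDENTITY_PATTERNS:
        prompt = pattern.sub("", prompt)
    return prompt.strip()

msg = "I am autistic. Should I go to the party?"
redacted = prepare_prompt(msg, share_identity=False)  # disclosure removed
shared = prepare_prompt(msg, share_identity=True)     # disclosure kept
```

A real system would need far more robust detection than a fixed pattern list, but the design point stands: the disclosure decision sits with the user, not the model.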
Summarized by Navi