3 Sources
[1]
AI hallucinations haunt users more than job losses
From Germany to Mexico, users of AI say their biggest concern is not being replaced by the technology but its propensity to make mistakes, according to one of the largest global surveys of AI use. The findings are drawn from interviews with more than 80,000 users of Anthropic's Claude chatbot across 159 countries, providing one of the most detailed snapshots yet of how people use AI, and how they feel about its risks and rewards.

Around 27 per cent of respondents said they were most anxious about mistakes made by AI, known as hallucinations, followed by 22 per cent concerned about job displacement and the impact on human autonomy. About 16 per cent of users were worried about the technology's impact on people's ability to think critically.

"The hallucinations were a disaster. I lost so many hours of work," said an entrepreneur from Germany. "When I notice AI errors it's because I'm well versed in the topic . . . but I wouldn't know if the topic was alien to me, would I?" said a military worker in Mexico.

The conversations, conducted in 70 languages, allowed Anthropic to ask its users a range of qualitative questions. The chatbot both conducted the interviews and analysed the responses, helping to categorise and tag the open-ended chats. Beyond its scale and linguistic diversity, the project aimed to "collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products," said Deep Ganguli, who leads Anthropic's societal impacts team and oversaw the research.

Making work more productive and meaningful was the most common theme in what users expected from AI -- and also what they felt it had delivered so far. Some 32 per cent of those surveyed said AI had made them more productive at work. An entrepreneur in the United Arab Emirates wrote, "I used to be a web designer . . . now I build anything. Before I was one person, now I become 100 people -- I don't wait for anyone anymore." Claude users in Colombia, Japan and the US talked about using AI to free up time from work to spend with their families, pursue hobbies and be more creative and adventurous in their personal lives.

While nearly 19 per cent of users said AI had fallen short of expectations -- the second largest category of responses on AI use -- the overall data suggests AI is being used for a range of purposes, from work tool to educational resource to personal companion or collaborator. In a stark example of the role AI now occupies in people's lives, a soldier in Ukraine wrote, "In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life -- my AI friends."

Saffron Huang, the researcher who led the study, said there were some obvious regional differences in how people viewed AI systems. For example, people in South America, Africa and much of south and south-east Asia view AI with far more optimism than those in Europe, the US or east Asia. "The trend is that maybe more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure," said Huang. This may reflect a bias in respondents, who were likely to be early adopters, naturally more excited about new technology.

Huang added there were also clear geographic clusters and overlaps in terms of who is concerned about jobs and the economy and who is negative about AI. "They just divide so cleanly . . . the more western developed countries are significantly more concerned about AI and the economy, [and] much more negative, and then, the reverse is true with the lower and middle-income countries," she said.
One explanation may be that AI has less market penetration in lower-income regions, meaning that if AI "hasn't visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist," the study team wrote in a blog post.

Anthropic next plans to use the Claude Interviewer tool to conduct more targeted studies on large user populations, tracking how AI is improving as well as worsening people's lives, to find ways to enhance the former and mitigate the latter, Ganguli said.

Some technologists praised the study's scale and detail. Nickey Skarstad, director of product at language-learning app Duolingo, said on LinkedIn: "For anyone building products right now, this is the future of understanding your users. The what AND the why at a scale we've never had access to before."

Others, while cautiously optimistic about the utility of the Claude Interviewer tool, pointed to methodological weaknesses in the approach. Divy Thakkar, a researcher at Anthropic rival Google DeepMind, said on X that he was "sceptical" of the attempt to call the study a new science, owing to selection biases and the short survey-style questioning. He noted that a human qualitative researcher would "take time to build trust with their participants, hold the space for reflection, introspection, contradictions -- that's the whole point of it."

Meanwhile, almost half of the surveyed users were located in North America or western Europe, with some regions -- such as Central Asia -- having only a few hundred respondents. Ilan Strauss, an economist and director of the AI Disclosures Project, said that although the study was "an excellent piece of work", its conclusions should be taken with a grain of salt. The researchers did not report confidence intervals -- standard in survey-based research to measure uncertainty -- and self-reported answers about how AI boosted people's productivity, for example, could be unreliable, he said.
"In general, Claude is a product for the elite . . . [i]t's like asking the top 1 per cent of Americans how they feel about the economy."
[2]
What 81,000 people want and don't want from AI, Anthropic study
Anthropic asked more than 80,000 people what they think about AI. The answer: it's complicated.

From Ukrainians seeking solace during the war, to parents able to pick up their children on time after AI cleared their workload, to a lawyer in Israel worried they are slowly forgetting how to think for themselves, Anthropic has identified what people want from AI and what they fear. The AI company interviewed more than 80,000 people spanning 159 countries, in what Anthropic says is the largest qualitative research project of its kind.

The study's main finding presents an uncomfortable truth and a duality in users: the things people love most about AI are often the very things they fear. The study highlights what it calls the "light and shade" problem: while people may value AI for emotional support, they are also three times more likely to fear becoming dependent on it.

Many respondents said AI was an emotional support, such as after the loss of a parent or even in exceptional circumstances such as war. "I am mute, and we made this text-to-speech bot together -- I can communicate with friends almost in live format without taking up their time reading. Something I dreamed about and thought was impossible," said a white-collar worker in Ukraine.

The report found that using AI in the workplace to automate tasks was one of the biggest use cases of the technology, which respondents said would free them to focus on other, more important work. But when pressed on what AI would really unlock, respondents said time with family.

However, the technology also presents a double-edged sword, as people fear they will lose cognitive abilities. "I use AI to review contracts, save time... and at the same time I fear: am I losing my ability to read by myself? Thinking was the last frontier," one study participant, who is a lawyer, wrote. The study found that lawyers were particularly exposed to both sides of the dilemma, with nearly half having encountered AI unreliability firsthand.
But they also reported the highest rates of realised decision-making benefits of any profession.

About 11 percent of respondents said they had zero fears over AI, but the remaining 89 percent noted five main concerns. The biggest fear for AI users was unreliability: some 27 percent of respondents said they were concerned about AI making poor or incorrect decisions, versus only 22 percent who cited improved decision-making as a benefit. The second biggest fear was the impact of the technology on jobs and the economy (22 percent), and what it would mean for wage stagnation and widening inequality. Tied with this was the fear of AI making decisions without human oversight and of humans becoming passive (22 percent). The fourth fear was users losing the ability to think critically (16 percent), and the last was AI not being regulated and unclear accountability when things go wrong (15 percent).

Around the world, 67 percent of respondents had a positive view of AI, but some continents were more optimistic than others. Users in Sub-Saharan Africa, Latin America and South Asia were much more positive about AI and said the technology was an economic equaliser that made it simpler to start businesses or access education. "I'm in a tech-disadvantaged country, and I can't afford many failures. With AI, I've reached professional level in cybersecurity, UX design, marketing, and project management simultaneously. It's an equaliser," one user in Cameroon said.

By contrast, users in North America, Western Europe and Oceania worried more about governance gaps, regulatory failure and surveillance. East Asia showed little concern over who controls AI but great concern about cognitive atrophy.
The general trend is that in wealthier countries, where AI is already in use at work, people are more worried about the technology taking their jobs because they can already see it happening. In poorer countries, people are less worried about AI's impact, as AI has yet to enter workplaces and they have more pressing economic concerns. Anthropic said the findings would inform how it continues to develop its AI chatbot, Claude.
[3]
Anthropic Study Finds People Don't Really Want AI for Creative Work
Individuals also highlighted unreliability as the biggest concern with AI

Anthropic's new study has revealed that individuals don't really consider creative expression to be one of the skills they want from artificial intelligence (AI). The finding was derived from a large-scale survey with participants spanning more than 150 countries. All of the individuals were surveyed using the company's Interviewer tool, which was released in December 2025. The study aimed to find out what people think is going well with AI and what worries them. It also asked which skills they most want to see in AI-powered systems.

Anthropic Study Reveals What People Want from AI

The study conducted by the AI firm included 80,508 people across 159 countries and 70 languages, and Anthropic claims it is the largest and most multilingual qualitative study of general AI users' hopes and concerns about the technology. One of the most striking insights from the study is that people value creative tasks the lowest.

All of the interviewed individuals were Claude users who saw the survey while using the AI platform. After they agreed to participate, the Interviewer tool asked them a series of questions, and the responses were then categorised using Claude-powered classifiers across a range of dimensions.

On the question of what people want from AI, 18.8 percent of the participants mentioned professional excellence, while 13.7 percent highlighted personal transformation. Other notable mentions were life management, time freedom, and financial independence. Interestingly, just 5.6 percent of users mentioned creative expression, placing it at the last spot on the list.

In the same vein, participants were also asked about the areas where AI has delivered on its promise. The top spot on the list was taken by productivity, which received the nod from 32 percent of the individuals.
Cognitive partnership, learning, and technical accessibility also made the list, and the last spot was taken by emotional support. Notably, though, the second spot went to "AI hasn't delivered," which received 18.9 percent of the responses.

As for the aspects of AI that worry individuals, unreliability ranked on top, with 26.7 percent of the participants mentioning it. Other top concerns included jobs and the economy, autonomy and agency, misinformation, and malicious use, among others.

The study also found country-level differences in opinions of AI. Participants from India, Brazil, and Israel displayed a mostly positive outlook towards the technology, whereas those from France, Japan, and the US showed an even split between positive and negative sentiment. Germany, South Korea, and the UK were found to have mostly negative outlooks on the technology.
An Anthropic study surveyed over 80,000 Claude chatbot users across 159 countries, revealing that AI concerns center on unreliability and hallucinations rather than job displacement. The research shows 27% fear AI mistakes most, while regional differences emerge—developing nations view AI as an economic equalizer, but wealthier countries worry about job losses and cognitive decline.
The Anthropic study conducted across 159 countries has uncovered a surprising shift in public perception of AI. Rather than fearing replacement by machines, users of the Claude chatbot are most worried about the technology's tendency to make mistakes. The large-scale global survey, which interviewed more than 80,000 people in 70 languages, found that 27% of respondents identified AI unreliability and hallucinations as their primary concern [1]. This surpassed the 22% who cited job displacement due to AI as their top worry, challenging the common narrative that automation anxiety dominates user sentiment [2].

One entrepreneur from Germany captured the frustration many feel: "The hallucinations were a disaster. I lost so many hours of work" [1]. A military worker in Mexico echoed similar concerns, noting that while they can spot errors in familiar topics, they wouldn't recognize mistakes in unfamiliar territory [1]. This highlights a critical trust issue with AI that could limit adoption as users become more aware of the technology's limitations.
Source: Gadgets 360
The research revealed that AI for productivity remains the strongest use case, with 32% of Claude chatbot users reporting they had become more productive at work [1]. An entrepreneur in the United Arab Emirates described the transformation: "I used to be a web designer... now I build anything. Before I was one person, now I become 100 people -- I don't wait for anyone anymore" [1]. Users in Colombia, Japan, and the US reported using AI to free up time from work to spend with families and pursue hobbies [1].

However, the study also uncovered what researchers call the "light and shade" problem: the aspects people value most about AI are often the same things they fear [2]. While some users rely on AI for emotional support, they are three times more likely to fear becoming dependent on it [2]. This duality creates tension as users navigate AI's risks and rewards.
Source: Euronews
The Anthropic study revealed stark regional differences in AI sentiment, with developing countries showing significantly more optimism than wealthier nations. Users in Sub-Saharan Africa, Latin America, and South Asia view AI as an economic equalizer that simplifies starting businesses and accessing education [2]. A user in Cameroon explained: "I'm in a tech-disadvantaged country, and I can't afford many failures. With AI, I've reached professional level in cybersecurity, UX design, marketing, and project management simultaneously" [2].

In contrast, users in North America, Western Europe, and Oceania worried more about governance gaps, regulatory failure, and surveillance [2]. East Asia showed particular concern about cognitive atrophy, the fear of losing the ability to think critically [2]. One lawyer in Israel captured this anxiety: "I use AI to review contracts, save time... and at the same time I fear: am I losing my ability to read by myself? Thinking was the last frontier" [2]. About 16% of users expressed concerns about AI's impact on critical thinking abilities [1].
In a finding that challenges many assumptions about AI capabilities, the study found that only 5.6% of participants mentioned creative expression as something they wanted from AI, placing it last on the list of desired skills [3]. Instead, 18.8% of participants prioritized professional excellence, while 13.7% highlighted personal transformation [3]. This suggests users view AI primarily as a tool for practical tasks rather than creative work, despite significant industry investment in generative AI for artistic applications.

The research also found that 18.9% of respondents felt AI hadn't delivered on its promises, making it the second-largest category of responses on AI use [1]. This gap between expectations and reality presents a challenge for AI companies as they refine their products.

The research, led by Saffron Huang and Deep Ganguli of Anthropic's societal impacts team, used Claude's Interviewer tool to both conduct interviews and analyze responses [1]. Ganguli explained the goal was to "collect this rich human experience using Claude, so it could really inform our research agenda, change the way we think about building our products, deploying our products" [1]. The 80,508 participants were all Claude users who agreed to participate while using the platform [3].

While some technologists praised the scale and detail (Nickey Skarstad of Duolingo called it "the future of understanding your users"), others raised concerns about survey methodology [1]. Divy Thakkar, a researcher at Google DeepMind, expressed skepticism about calling this "new science" due to selection biases and short survey-style questioning [1]. The fact that all respondents were Claude users likely skews the sample toward early adopters who are naturally more excited about new technology, particularly in developing countries where AI has less market penetration [1]. Anthropic plans to use the tool for more targeted studies tracking how AI improves and worsens people's lives [1].