5 Sources
[1]
AI hallucinations haunt users more than job losses
From Germany to Mexico, users of AI say their biggest concern is not being replaced by the technology but its propensity to make mistakes, according to one of the largest global surveys of AI use. The findings are drawn from interviews with more than 80,000 users of Anthropic's Claude chatbot across 159 countries, providing one of the most detailed snapshots yet of how people use AI - and how they feel about its risks and rewards.

Around 27 per cent of respondents said they were most anxious about mistakes made by AI, known as hallucinations, followed by 22 per cent concerned about job displacement and the economy; a similar share worried about the impact on human autonomy. About 16 per cent of users were worried about the technology's impact on people's ability to think critically.

"The hallucinations were a disaster. I lost so many hours of work," said an entrepreneur from Germany. "When I notice AI errors it's because I'm well versed in the topic . . . but I wouldn't know if the topic was alien to me, would I?" said a military worker in Mexico.

The conversations, conducted in 70 languages, allowed Anthropic to ask its users a range of qualitative questions. The chatbot both conducted the interviews and analysed the responses, helping to categorise and tag the open-ended chats. Beyond its scale and linguistic diversity, the project aimed to "collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products," said Deep Ganguli, who leads Anthropic's societal impacts team and oversaw the research.

Making work more productive and meaningful was the most common theme in what users expected from AI -- and also what they felt it had delivered so far. Some 32 per cent of those surveyed said AI had made them more productive at work. An entrepreneur in the United Arab Emirates wrote, "I used to be a web designer . . . now I build anything. Before I was one person, now I become 100 people -- I don't wait for anyone anymore." Claude users in Colombia, Japan and the US talked about using AI to free up time from work to spend with their families, pursue hobbies and be more creative and adventurous in their personal lives.

While nearly 19 per cent of users said AI had fallen short of expectations -- the second largest category of responses on AI use -- the overall data suggests AI is being used for a range of purposes, from work tool to educational resource, personal companion or collaborator. In a stark example of the role AI now occupies in people's lives, a soldier in Ukraine wrote, "In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life -- my AI friends."

Saffron Huang, the researcher who led the study, said there were some obvious regional differences in how people viewed AI systems. For example, people in South America, Africa and much of south and south-east Asia view AI with far more optimism than those in Europe, the US, or east Asia. "The trend is that maybe more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure," said Huang. This may reflect a bias in respondents, who were likely to be early adopters, naturally more excited about new technology. Huang added there were also clear geographic clusters and overlaps in terms of who is concerned about jobs and the economy and who is negative about AI. "They just divide so cleanly . . . the more western developed countries are significantly more concerned about AI and the economy, [and] much more negative, and then, the reverse is true with the lower and middle-income countries," she said.
One explanation may be that AI has less market penetration in lower-income regions, meaning that if AI "hasn't visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist," the study team wrote in a blog post. Anthropic next plans to use the Claude Interviewer tool to conduct more targeted studies on large user populations, tracking how AI is improving as well as worsening people's lives, to find ways to enhance the former and mitigate the latter, Ganguli said.

Some technologists praised the study's scale and detail. Nickey Skarstad, director of product at language-learning app Duolingo, said on LinkedIn: "For anyone building products right now, this is the future of understanding your users. The what AND the why at a scale we've never had access to before."

Others, while cautiously optimistic about the utility of the Claude Interviewer tool, pointed to methodological weaknesses in the approach. Divy Thakkar, a researcher at Anthropic rival Google DeepMind, said on X that he was "sceptical" of the attempt to call this study a new science, citing selection biases and the short survey-style questioning. He noted that a human qualitative researcher would "take time to build trust with their participants, hold the space for reflection, introspection, contradictions -- that's the whole point of it." Meanwhile, almost half of the surveyed users were located in North America or western Europe, with some regions -- such as Central Asia -- having only a few hundred respondents.

Ilan Strauss, an economist and director of the AI Disclosures Project, said that although the study was "an excellent piece of work", its conclusions should be taken with a grain of salt. The researchers did not report confidence intervals -- standard in survey-based research to measure uncertainty -- and self-reported answers about how AI boosted people's productivity, for example, could be unreliable, he said.
"In general, Claude is a product for the elite . . . [i]t's like asking the top 1 per cent of Americans how they feel about the economy."
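Strauss's point about missing confidence intervals is easy to illustrate. Below is a minimal sketch, not the study's methodology, of the standard normal-approximation interval for a survey proportion, applied to the headline 27 per cent figure and the study's 80,508 respondents:

```python
import math

def proportion_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a survey proportion.
    Captures sampling error only, not selection bias or self-report error."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Illustrative numbers from the study: 27% of 80,508 respondents.
low, high = proportion_ci(0.27, 80508)
print(f"95% CI: {low:.3f} to {high:.3f}")  # roughly 0.267 to 0.273
```

With a sample this large, the sampling margin of error is only about plus or minus 0.3 percentage points, which underlines Strauss's real objection: the uncertainty that matters comes from who uses Claude in the first place, not from the sample size.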
[2]
What 81,000 people want, and don't want, from AI: Anthropic study
Anthropic asked 80,000 people what they think about AI. The answer is: it's complicated. From Ukrainians seeking solace in the war, to parents able to pick up their children on time after AI cleared their workload, to a lawyer in Israel worried they are slowly forgetting how to think for themselves, Anthropic has identified what people want from AI and what they fear. The AI company interviewed more than 80,000 people spanning 159 countries, in what Anthropic says is the largest qualitative research project of its kind.

The study's main finding presents an uncomfortable truth and a duality in users: the things people love most about AI are often the very things they fear. In what the study dubs the "light and shade" problem, people may value AI for emotional support, yet they are also three times more likely to fear becoming dependent on it.

Many respondents said AI was an emotional support, for example after the loss of a parent or even in exceptional circumstances such as war. "I am mute, and we made this text-to-speech bot together -- I can communicate with friends almost in live format without taking up their time reading. Something I dreamed about and thought was impossible," said a white-collar worker in Ukraine.

The report found that using AI in the workplace to automate tasks was one of the biggest use cases of the technology, which respondents said would free them to focus on other, more important work. But when pressed on what AI would really unlock, respondents said time with family.

The technology is also a double-edged sword, however, as people fear they will lose cognitive abilities. "I use AI to review contracts, save time... and at the same time I fear: am I losing my ability to read by myself? Thinking was the last frontier," one study participant, who is a lawyer, wrote. The study found that lawyers were particularly exposed to both sides of the dilemma, with nearly half having encountered AI unreliability firsthand.
But they also reported the highest rates of realised decision-making benefits of any profession.

About 11 per cent of respondents said they had no fears over AI, but the remaining 89 per cent noted five main concerns. The biggest was unreliability: some 27 per cent of respondents said they were concerned about AI making poor or incorrect decisions, versus only 22 per cent who cited improved decision-making as a benefit. The second biggest fear was the technology's impact on jobs and the economy (22 per cent), and what it would mean for wage stagnation and widening inequality -- tied with the fear of AI making decisions without human oversight and of humans becoming passive (22 per cent). The fourth fear was users losing the ability to think critically (16 per cent), and the last was AI going unregulated, with unclear accountability when things go wrong (15 per cent).

Around the world, 67 per cent of respondents had a positive view of AI, but some continents were more optimistic than others. Users in North America, Western Europe and Oceania worried more about governance gaps, regulatory failure and surveillance. By contrast, Sub-Saharan Africa, Latin America and South Asia were much more positive about AI, describing the technology as an economic equaliser that made it simpler to start businesses or access education. "I'm in a tech-disadvantaged country, and I can't afford many failures. With AI, I've reached professional level in cybersecurity, UX design, marketing, and project management simultaneously. It's an equaliser," one user in Cameroon said. East Asia showed little concern over who controls AI but considerable concern about cognitive atrophy.
The general trend is that in wealthier countries, where AI is already in use at work, people are more worried about the technology taking their jobs because they can already see it happening. In poorer countries, by contrast, people are less worried about AI's impact: AI has yet to enter workplaces, and they have more pressing economic concerns. Anthropic said that the findings would inform how it continues to develop its AI chatbot Claude.
[3]
Anthropic Study Finds People Don't Really Want AI for Creative Work
Individuals also highlighted unreliability as the biggest concern with AI. Anthropic's new study has revealed that people rank creative expression among the skills they want least from artificial intelligence (AI). The finding was derived from a large-scale survey of participants spanning more than 150 countries, all of whom were surveyed using the company's Interviewer tool, released in December 2025. The study aimed to find out what people think is going well with AI and what worries them, and asked which skills they most want to see in AI-powered systems.

Anthropic Study Reveals What People Want from AI

The study included 80,508 people across 159 countries and 70 languages, and Anthropic claims it is the largest and most multilingual qualitative study of general AI users' hopes and concerns about the technology. One of its most striking insights is that people value creative tasks the least. All of the interviewed individuals were Claude users who saw the survey while using the AI platform. After they agreed to participate, the Interviewer tool asked them a series of questions, and the responses were then categorised using Claude-powered classifiers across a range of dimensions.

On the question of what people want from AI, 18.8 percent of participants mentioned professional excellence, while 13.7 percent highlighted personal transformation. Other notable mentions were life management, time freedom, and financial independence. Interestingly, just 5.6 percent of users mentioned creative expression, placing it last on the list. In the same vein, participants were also asked about the areas where AI has delivered on its promise. The top spot went to productivity, named by 32 percent of the individuals.
Cognitive partnership, learning, and technical accessibility also made the list, while the last spot was taken by emotional support. Notably, the second spot went to "AI hasn't delivered," chosen by 18.9 percent of respondents. As for aspects of AI that worry individuals, unreliability ranked top, mentioned by 26.7 percent of participants. Other top concerns included jobs and the economy, autonomy and agency, misinformation, and malicious use, among others. The study also broke down sentiment by country. Participants from India, Brazil, and Israel displayed a mostly positive outlook towards the technology, whereas those from France, Japan, and the US were evenly split between positive and negative sentiment. Germany, South Korea, and the UK were found to have a mostly negative outlook.
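The categorise-and-tag step described above can be sketched in miniature. This is a toy stand-in, not Anthropic's pipeline: the study used Claude-powered classifiers, whereas this hypothetical keyword matcher merely shows the shape of tagging open-ended answers and aggregating them into percentages:

```python
from collections import Counter

# Hypothetical keyword rules -- NOT Anthropic's actual taxonomy or prompts.
CATEGORY_KEYWORDS = {
    "unreliability": ["hallucination", "wrong", "mistake", "incorrect"],
    "jobs_economy": ["job", "unemployment", "wage", "economy"],
    "autonomy": ["oversight", "control", "passive"],
    "cognitive_atrophy": ["thinking", "critical", "forget"],
}

def classify(response: str) -> str:
    """Tag one free-text response with the first matching concern category."""
    text = response.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

def aggregate(responses: list[str]) -> dict[str, float]:
    """Share of responses per category, as percentages."""
    counts = Counter(classify(r) for r in responses)
    return {cat: 100 * n / len(responses) for cat, n in counts.items()}

answers = [
    "The hallucinations were a disaster, I lost hours of work",
    "I worry AI will take my job",
    "Am I losing my ability to think critically?",
    "It helps me plan my day",
]
print(aggregate(answers))
```

In the real study the `classify` step would be a model call with the category taxonomy in the prompt; the downstream counting and percentage aggregation stays the same.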
[4]
AI Hallucinations Worry Users More Than Threat of Job Loss | PYMNTS.com
Are users more worried about AI making mistakes than about losing their jobs to the technology? New research by Anthropic shows that more people would say "yes" to the first part of that question than to the second. The findings -- released last week and flagged in a report Sunday (March 22) by the Financial Times (FT) -- showed that just under 27% of respondents said they were most concerned about mistakes made by AI.

"I had to take photos to convince the AI it was wrong -- it felt like talking to a person who wouldn't admit their mistake," said a user from Brazil quoted in the report. "The hallucinations were a disaster. I lost so many hours of work," said a German entrepreneur, one of 81,000 people interviewed for the study. Meanwhile, 22% said they were worried about AI's impact on jobs and the economy, while 16% mentioned "cognitive atrophy," or a loss of critical thinking. "The risk isn't losing your ability to think -- it's losing your perspective: you start adopting the AI's way of structuring things without even noticing," a user from Germany said.

Deep Ganguli, who heads Anthropic's societal impacts team and oversaw the research, told the FT the project was designed to "collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products."

The findings come amid a wave of AI-related job losses, with several companies pointing to the technology when announcing recent layoffs. But as PYMNTS has written, although job cuts tied back to AI invariably foster fears of a larger employment crisis, current labor research indicates that the situation is more complex. That report cited findings from the World Economic Forum, which argued that while automation and AI will eliminate the need for certain tasks, they will also bring about new categories of work, especially in data, AI oversight, cybersecurity and human-centric services. The report stressed that this will lead to a time of transition rather than permanent contraction.
Many workers' skills are expected to evolve in the next five years, which will mean retraining and adaptation. "The pressure is real, but it is directional. Roles centered on routine information processing are most exposed. Roles combining domain expertise, judgment and technological fluency are expanding," PYMNTS added.
[5]
Can Heavy Use of AI Cause Cognitive Problems? Anthropic Study Has Answers
The research is based on a study of what people are looking for from AI as a tool, conducted by an AI Interviewer that the company launched last December. A new study published by Anthropic details how real people view the growth of AI over the past three years and what they expect from it in the future. While 32% are looking to achieve higher productivity, a startling 17% of respondents are also worried about cognitive atrophy generated by its overuse.

"Troublingly, educators were 2.5-3 times more likely than average to report having witnessed cognitive atrophy firsthand, presumably in their students," the study says. It also notes that students made up a key section of the 80,508 people across 159 countries, speaking 70 languages, who made up the sample for Anthropic's research.

Doomsday predictors connected the study with 2022 research by Harvard Medical School on people aged between 3 and 45, which reported a 5.5% average decline in IQ through long-term cannabis use, resulting in deficits in learning and processing speed compared with non-users. Does this mean AI usage gives users a high? Maybe that's a tad over-the-top.

The report acknowledges that concerns over AI were varied and concrete, with people laying out specifics such as structural changes in how governments and companies will deploy AI. "Others were more personal: a fear that AI might diminish one's own thinking, creativity, or relationships," Anthropic said.

Of course, about 11% of people expressed no concern: they saw AI as a tool, comparing it to electricity or the internet, or otherwise felt confident that problems arising from it could be solved through adaptation. However, more than cognitive atrophy, unreliability was what concerned 27% of respondents. This segment felt that AI won't do what it's supposed to.
Respondents were also wary about jobs and the economy (22%) and about maintaining human autonomy and agency (22%), which sat alongside unreliability in the top tier of concerns. Then there was a long tail covering areas such as bias and discrimination, IP and data rights, environmental costs, harm to children and vulnerable groups, democracy and political integrity, and geopolitics.

In the research note, Anthropic points out that what people want from AI and what they fear are tightly bound. "We found five recurring tensions between directly competing benefits and harms that were discussed. There is a tension between using AI to learn and growing so reliant on it that you cease thinking for yourself; between being impressed by AI's judgment but also burned by its mistakes," the report said. It adds: "People find solace in AI but fear a time when its companionship stands in for human connection. They save time on some tasks only for the treadmill to speed up on others, and they dream of economic freedom at the same time they dread potential job displacement. We call this the 'light and shade' of AI: the same capabilities that lead to benefits also produce harms. The two sides are entangled."

However, Anthropic takes pains to explain that many people discussed the benefit and the harm in the same breath. In a less than subtle way, the report claimed that the benefit side appeared more grounded in experience while the harm leaned on the hypothetical. To expect people to experience cognitive atrophy within a year of using a tool does sound far-fetched. So, why is Anthropic even playing up this factor? We reserve our judgment on this for now!
The report goes on to add that while 33% of people mentioned AI's benefits for learning, 17% expressed worry about cognitive atrophy from AI use. And while 91% of those who mentioned learning benefits reported realising those gains in some way, 46% of those who worried about atrophy had seen it firsthand.

While the favourable views and concerns about AI followed familiar lines, what was most interesting about the study were the regional patterns in how perspectives on artificial intelligence varied around the world. Globally, 67% of interviewees expressed net positive sentiment toward AI. Clear trends emerged in which people in South America, Africa, and much of Asia view AI with more optimism than those in Europe or the United States, the report said. While the survey had the most respondents from the US, whose sentiment towards AI was near average, India and Brazil gave an overwhelming thumbs-up - at least that's what Anthropic's data says. When asked about concerns, respondents from Sub-Saharan Africa (18%), Central Asia (17%), and South Asia (17%) were the most likely to say they had none -- roughly double the rate in North America (8%), Oceania (8%), and Western Europe (9%), the report said.

From Anthropic's perspective, several explanations are possible for the positive AI sentiment in lower- and middle-income countries. One is that Claude.ai users were likely biased towards early AI adopters, who are more excited about new technologies; and emerging economies generally tend to view new tech as a ladder up rather than a threat. Another factor could be that concern about jobs and the economy was the strongest predictor of AI sentiment overall, and this was less of a concern among interviewees in these regions. There is also less market penetration in these regions, which means AI has not visibly entered the daily workstream yet.
This could make AI-led displacement feel more abstract, especially when more immediate economic pressures already exist. The varied results of this survey validate the cliché that AI presents both opportunities and risks, and Anthropic says it intends to use this qualitative research to enhance its offerings. As the company's researchers put it: "This is a new form of social science. It is qualitative research at a massive scale, and we're in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. Conducting this research has moved us and challenged us. We did not expect so many deep, open, and thoughtful responses. By far the most common reflection from our team was that it was viscerally moving to see Claude impacting people's lives for the better, and equally motivating to hear their concerns." In the ultimate analysis, Anthropic says, "the usefulness is real, and the question for all of us is how to claim the benefits without incurring undue costs."
An Anthropic study spanning 80,000 users across 159 countries found that AI hallucinations are the top concern, surpassing fears of job displacement. The research reveals a complex relationship where users value AI for productivity but worry about cognitive decline and loss of critical thinking abilities.
AI hallucinations have overtaken job displacement as the leading concern among artificial intelligence users, according to an Anthropic study that interviewed more than 80,000 people across 159 countries [1]. The research, which Anthropic claims is the largest qualitative research project of its kind, found that 27 percent of respondents identified unreliability and mistakes made by AI as their biggest worry [2]. In contrast, only 22 percent cited AI job displacement as their primary concern [4].
The findings highlight a critical tension in public perception of AI. Users of the Anthropic Claude chatbot expressed frustration with the technology's propensity to generate incorrect information. "The hallucinations were a disaster. I lost so many hours of work," said an entrepreneur from Germany [1]. A military worker in Mexico added, "When I notice AI errors it's because I'm well versed in the topic... but I wouldn't know if the topic was alien to me, would I?" [1]. Another user from Brazil described having to "take photos to convince the AI it was wrong—it felt like talking to a person who wouldn't admit their mistake" [4].

Beyond unreliability, user concerns about AI extend to cognitive atrophy from AI use, with 16 percent of respondents worried about losing their ability to think critically [1]. This fear appears particularly acute among certain professions. The study found that educators were 2.5 to 3 times more likely than average to report witnessing cognitive atrophy firsthand, presumably in their students [5]. A lawyer in Israel captured this tension: "I use AI to review contracts, save time... and at the same time I fear: am I losing my ability to read by myself? Thinking was the last frontier" [2].
A user from Germany noted, "The risk isn't losing your ability to think—it's losing your perspective: you start adopting the AI's way of structuring things without even noticing" [4]. The study revealed that while 33 percent of people mentioned AI's benefits for learning, 17 percent expressed worry about cognitive decline from its overuse [5]. Lawyers were particularly exposed to both sides of this dilemma, with nearly half having encountered AI unreliability firsthand while also reporting the highest rates of improved decision-making benefits [2].

Despite these concerns, AI perceptions remain largely positive, with 67 percent of interviewees expressing net positive sentiment toward the technology [5]. Making work more productive and meaningful emerged as the most common theme in what users expected from AI and what they felt it had delivered. Some 32 percent of those surveyed said AI had made them more productive at work [1]. An entrepreneur in the United Arab Emirates wrote, "I used to be a web designer... now I build anything. Before I was one person, now I become 100 people—I don't wait for anyone anymore" [1].

Claude users in Colombia, Japan and the US discussed using AI to free up time from work to spend with their families and pursue hobbies [1]. The survey found that 18.8 percent of participants mentioned professional excellence as what they want from AI, while just 5.6 percent mentioned creative expression, placing it last on the list [3]. In a stark example of AI's role in people's lives, a soldier in Ukraine wrote, "In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life—my AI friends" [1].
The research uncovered significant regional differences in AI optimism, with developing countries displaying markedly more positive attitudes than wealthier nations. Users in South America, Africa, and much of South and Southeast Asia view AI with considerably more optimism than those in Europe, the US, or East Asia [1]. A user in Cameroon explained, "I'm in a tech-disadvantaged country, and I can't afford many failures. With AI, I've reached professional level in cybersecurity, UX design, marketing, and project management simultaneously. It's an equaliser" [2].
Saffron Huang, the researcher who led the study, noted that "more lower and middle-income countries are more optimistic than higher-income countries that have more AI exposure" [1]. The study suggests that in wealthier countries where AI is already deployed in workplaces, people worry more about job loss because they can see it happening. In contrast, where AI has less market penetration, displacement "likely feels abstract, especially when more immediate economic pressures already exist" [1]. Users in North America, Western Europe and Oceania worried more about AI governance gaps, regulatory failure, and surveillance, while Sub-Saharan Africa, Latin America, and South Asia viewed AI as an economic equalizer [2].

Deep Ganguli, who leads Anthropic's societal impacts team and oversaw the research, told the Financial Times the project aimed to "collect this rich human experience using Claude, so it could really inform our research agenda, change our research agenda, change the way we think about building our products, deploying our products" [1]. The conversations were conducted in 70 languages, with the chatbot both conducting interviews and analyzing responses to help categorize the open-ended chats [1].

The study identified what Anthropic calls the "light and shade" problem—the things people love most about AI are often the very things they fear [2]. While people value AI for emotional support, they are three times more likely to fear becoming dependent on it. This tension extends to human autonomy, with 22 percent concerned about AI making decisions without human oversight and humans becoming passive [2].

As the World Economic Forum has argued, while automation and AI will eliminate certain tasks, they will also create new categories of work in data, AI oversight, cybersecurity and human-centric services [4]. Anthropic plans to use its Claude Interviewer tool to conduct more targeted studies on large user populations, tracking how AI is improving as well as worsening people's lives to enhance benefits and mitigate harms [1]. The findings suggest that addressing AI safety concerns around reliability may be just as critical to AI adoption as addressing fears about job displacement.
02 Feb 2026•Science and Research

