2 Sources
[1]
How we really judge AI
Caption: A new study finds that people are neither entirely enthusiastic nor totally averse to AI. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical upshot of using AI, case by case.

Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now, suppose you are applying for a job at a company where the HR department uses an AI system to screen resumes. Would you be comfortable with that?

A new study finds that people are neither entirely enthusiastic nor totally averse to AI. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical upshot of using AI, case by case.

"We propose that AI appreciation occurs when AI is perceived as being more capable than humans and personalization is perceived as being unnecessary in a given decision context," says MIT Professor Jackson Lu, co-author of a newly published paper detailing the study's results. "AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied."

The paper, "AI Aversion or Appreciation? A Capability-Personalization Framework and a Meta-Analytic Review," appears in Psychological Bulletin. The paper has eight co-authors, including Lu, who is the Career Development Associate Professor of Work and Organization Studies at the MIT Sloan School of Management.

New framework adds insight

People's reactions to AI have long been subject to extensive debate, often producing seemingly disparate findings. An influential 2015 paper on "algorithm aversion" found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on "algorithm appreciation" found that people preferred advice from AI over advice from humans.
To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people's preferences for AI versus humans. The researchers tested whether the data supported their proposed "Capability-Personalization Framework" -- the idea that in a given context, both the perceived capability of AI and the perceived necessity for personalization shape our preferences for either AI or humans.

Across the 163 studies, the research team analyzed over 82,000 reactions to 93 distinct "decision contexts" -- for instance, whether or not participants would feel comfortable with AI being used in cancer diagnoses. The analysis confirmed that the Capability-Personalization Framework indeed helps account for people's preferences.

"The meta-analysis supported our theoretical framework," Lu says. "Both dimensions are important: Individuals evaluate whether or not AI is more capable than people at a given task, and whether the task calls for personalization. People will prefer AI only if they think the AI is more capable than humans and the task is nonpersonal." He adds: "The key idea here is that high perceived capability alone does not guarantee AI appreciation. Personalization matters too."

For example, people tend to favor AI when it comes to detecting fraud or sorting large datasets -- areas where AI's abilities exceed those of humans in speed and scale, and personalization is not required. But they are more resistant to AI in contexts like therapy, job interviews, or medical diagnoses, where they feel a human is better able to recognize their unique circumstances.

"People have a fundamental desire to see themselves as unique and distinct from other people," Lu says. "AI is often viewed as impersonal and operating in a rote manner. Even if the AI is trained on a wealth of data, people feel AI can't grasp their personal situations. They want a human recruiter, a human doctor who can see them as distinct from other people."
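The framework's core prediction can be sketched as a simple decision rule. The snippet below is an illustrative reading of the article, not code from the study; the boolean labels assigned to each context are assumptions based on the examples Lu gives.

```python
def predicts_appreciation(ai_more_capable: bool, needs_personalization: bool) -> bool:
    """Sketch of the Capability-Personalization Framework's prediction:
    AI appreciation is expected only when AI is perceived as more capable
    than humans AND the task is seen as not requiring personalization."""
    return ai_more_capable and not needs_personalization

# Illustrative decision contexts drawn from the article; the
# (capable, personal) labels are my reading, not meta-analysis data.
contexts = {
    "fraud detection": (True, False),
    "sorting large datasets": (True, False),
    "therapy": (False, True),
    "medical diagnosis": (True, True),  # capability alone is not enough
}

for task, (capable, personal) in contexts.items():
    verdict = "appreciation" if predicts_appreciation(capable, personal) else "aversion"
    print(f"{task}: {verdict}")
```

Note that "medical diagnosis" comes out as aversion even with high perceived capability, which mirrors Lu's point that capability alone does not guarantee appreciation.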
Context also matters: From tangibility to unemployment

The study also uncovered other factors that influence individuals' preferences for AI. For instance, AI appreciation is more pronounced for tangible robots than for intangible algorithms. Economic context also matters: in countries with lower unemployment, AI appreciation is more pronounced. "It makes intuitive sense," Lu says. "If you worry about being replaced by AI, you're less likely to embrace it."

Lu is continuing to examine people's complex and evolving attitudes toward AI. While he does not view the current meta-analysis as the last word on the matter, he hopes the Capability-Personalization Framework offers a valuable lens for understanding how people evaluate AI across different contexts. "We're not claiming perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people's preferences for AI versus humans across a wide range of studies," Lu concludes.

In addition to Lu, the paper's co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University. The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.
[2]
Forget techno-optimists vs. Luddites -- most people judge AI by perceived capability and personalization needs
A meta-analysis of 163 studies shows that people's preferences for AI versus humans depend on perceived AI capability and the need for personalization in specific contexts.
A groundbreaking study led by MIT Professor Jackson Lu has shed light on how people truly judge artificial intelligence (AI). The research, published in Psychological Bulletin, challenges the notion that individuals fall into simple categories of techno-optimists or Luddites when it comes to AI acceptance [1][2].
Lu and his colleagues propose a new "Capability-Personalization Framework" to explain people's preferences for AI versus humans. This framework suggests that AI appreciation occurs when two conditions are met:
- AI is perceived as being more capable than humans at the task
- Personalization is perceived as unnecessary in the given decision context
"AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied," explains Lu [1].
Source: Massachusetts Institute of Technology
To test their framework, the researchers conducted a meta-analysis of 163 previous studies, examining over 82,000 reactions across 93 distinct decision contexts. The analysis strongly supported their theoretical framework, revealing that both perceived capability and personalization needs play crucial roles in shaping people's preferences [1][2].
The study found that people tend to favor AI in contexts where its capabilities clearly surpass human abilities and personalization is not required. For example:
- Detecting fraud
- Sorting and analyzing large datasets
However, people are more resistant to AI in situations where they believe human understanding and personalization are crucial, such as:
- Therapy
- Job interviews
- Medical diagnoses
Source: Tech Xplore
"People have a fundamental desire to see themselves as unique and distinct from other people," Lu notes. "Even if the AI is trained on a wealth of data, people feel AI can't grasp their personal situations" [1][2].
The research also uncovered other interesting factors that affect people's attitudes towards AI:
- AI appreciation is more pronounced for tangible robots than for intangible algorithms
- AI appreciation is stronger in countries with lower unemployment, where people worry less about being replaced
While the Capability-Personalization Framework provides valuable insights into AI acceptance, Lu acknowledges that it may not be the final word on the subject. He emphasizes that these two dimensions capture much of what shapes people's preferences for AI versus humans across a wide range of studies [1][2].
As AI continues to evolve and integrate into various aspects of our lives, understanding these nuanced attitudes will be crucial for developers, policymakers, and businesses seeking to implement AI solutions effectively and ethically.
The study's findings suggest that future AI development and implementation strategies should consider not only improving AI capabilities but also addressing people's needs for personalization in specific contexts to enhance acceptance and trust in AI technologies.
Summarized by Navi