3 Sources
[1]
The Less People Know About AI, the More They Like It
The rapid spread of artificial intelligence has people wondering: Who's most likely to embrace AI in their daily lives? Many assume it's the tech-savvy -- those who understand how AI works -- who are most eager to adopt it. Surprisingly, our new research, published in the Journal of Marketing, finds the opposite. People with less knowledge about AI are actually more open to using the technology.

We call this difference in adoption propensity the "lower literacy-higher receptivity" link. This link shows up across different groups, settings, and even countries. For instance, our analysis of data from market research company Ipsos spanning 27 countries reveals that people in nations with lower average AI literacy are more receptive toward AI adoption than those in nations with higher literacy. Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to indicate using it for tasks like academic assignments.

The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response, or plays a musical instrument, it can feel almost magical -- like it's crossing into human territory. Of course, AI doesn't actually possess human qualities. A chatbot might generate an empathetic response, but it doesn't feel empathy.

People with more technical knowledge about AI understand this. They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works), and computational models operate. This makes the technology less mysterious. On the other hand, those with less understanding may see AI as magical and awe-inspiring. We suggest this sense of magic makes them more open to using AI tools.
Our studies show this lower literacy-higher receptivity link is strongest for using AI tools in areas people associate with human traits, like providing emotional support or counseling. When it comes to tasks that don't evoke the same sense of humanlike qualities -- such as analyzing test results -- the pattern flips. People with higher AI literacy are more receptive to these uses because they focus on AI's efficiency, rather than any "magical" qualities.

Interestingly, this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary. Their openness to AI seems to stem from their sense of wonder about what it can do, despite these perceived drawbacks.

This finding offers new insights into why people respond so differently to emerging technologies. Some studies suggest consumers favor new tech, a phenomenon called "algorithm appreciation," while others show skepticism, or "algorithm aversion." Our research points to perceptions of AI's "magicalness" as a key factor shaping these reactions.

These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people's enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.
[2]
Knowing less about AI makes people more open to having it in their lives -- new research
by Chiara Longoni, Gil Appel and Stephanie Tully, The Conversation

To make the most of AI's potential, businesses, educators and policymakers need to strike this balance. By understanding how perceptions of "magicalness" shape people's openness to AI, we can help develop and deploy new AI-based products and services that take the way people view AI into account, and help them understand the benefits and risks of AI. And ideally, this will happen without causing a loss of the awe that inspires many people to embrace this new technology.
[3]
Knowing less about AI makes people more open to having it in their lives - new research
University of Southern California provides funding as a member of The Conversation US.
New research published in the Journal of Marketing finds that people with less knowledge about AI are more open to using the technology, challenging assumptions about tech adoption.
A groundbreaking study published in the Journal of Marketing has uncovered a surprising trend in artificial intelligence (AI) adoption: people with less knowledge about AI are more likely to embrace the technology in their daily lives [1]. This phenomenon, dubbed the "lower literacy-higher receptivity" link, challenges the common assumption that tech-savvy individuals are the most eager adopters of AI [2].
The research, conducted across various demographics and countries, reveals consistent patterns. An analysis of data from 27 countries, provided by market research company Ipsos, shows that nations with lower average AI literacy tend to be more receptive to AI adoption [3]. Similarly, a survey of US undergraduate students found that those with less understanding of AI were more inclined to use it for tasks such as academic assignments.
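As a rough illustration only -- not the authors' method, and not the Ipsos data -- the kind of country-level association described above can be checked with a simple Pearson correlation between average literacy and average receptivity. All figures below are invented for demonstration:

```python
# Illustrative sketch: a negative country-level correlation between
# average AI literacy and receptivity. The numbers are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-country averages: (AI literacy score, receptivity score)
countries = [(72, 41), (65, 48), (58, 55), (50, 60), (44, 67), (39, 71)]
literacy = [c[0] for c in countries]
receptivity = [c[1] for c in countries]

r = pearson_r(literacy, receptivity)
print(f"r = {r:.2f}")  # strongly negative: lower literacy, higher receptivity
```

A negative r is what the "lower literacy-higher receptivity" link would look like in data of this shape; the published study's actual analyses and effect sizes are in the Journal of Marketing paper.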
Researchers attribute this counterintuitive link to the perception of AI as "magical" or awe-inspiring, especially when it performs tasks traditionally associated with human capabilities. When AI creates art, writes emotive responses, or plays musical instruments, it can seem to cross into human territory, evoking a sense of wonder in those less familiar with its inner workings [1].
People with higher AI literacy understand the technical aspects of AI, such as algorithms, training data, and computational models. This knowledge demystifies the technology, potentially reducing the "wow factor" [2]. In contrast, those with less understanding may view AI through a lens of magic and awe, making them more open to its use.
The study found that the lower literacy-higher receptivity link is most pronounced for AI tools in areas associated with human traits, such as emotional support or counseling. However, for tasks that don't evoke human-like qualities, like data analysis, the pattern reverses. In these cases, people with higher AI literacy are more receptive, focusing on AI's efficiency rather than its perceived magical qualities [3].
These findings present a challenge for policymakers and educators. Efforts to increase AI literacy might inadvertently reduce people's enthusiasm for AI adoption by diminishing its perceived "magicalness" [1]. This creates a delicate balance between educating people about AI and maintaining their openness to its adoption.
To maximize AI's potential, businesses, educators, and policymakers need to strike a balance between demystifying AI and preserving the sense of awe that drives adoption. Understanding how perceptions of "magicalness" shape people's openness to AI can guide the development and deployment of new AI-based products and services, helping users understand both the benefits and risks of AI without losing the wonder that inspires many to embrace this new technology [3].