4 Sources
[1]
Current AI risks more alarming than apocalyptic future scenarios
Most people are more concerned about the immediate risks of artificial intelligence than about a theoretical future in which AI threatens humanity. A new study by the University of Zurich shows that respondents draw clear distinctions between abstract scenarios and specific, tangible problems, and take the latter particularly seriously.

There is broad consensus that artificial intelligence is associated with risks, but views differ on how those risks should be understood and prioritized. One widespread perception emphasizes theoretical long-term risks, such as AI potentially threatening the survival of humanity. Another common viewpoint focuses on immediate concerns, such as AI systems amplifying social prejudices or contributing to disinformation. Some fear that emphasizing dramatic "existential risks" may distract attention from the more urgent problems AI is already causing today.

Present and future AI risks

To examine these views, a team of political scientists at the University of Zurich conducted three large-scale online experiments involving more than 10,000 participants in the US and the UK. Some subjects were shown headlines that portrayed AI as a catastrophic risk. Others read about present threats such as discrimination or misinformation, and still others about the potential benefits of AI. The objective was to examine whether warnings about a far-off AI catastrophe diminish alertness to actual present problems.

Greater concern about present problems

"Our findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes," says Professor Fabrizio Gilardi from the Department of Political Science at UZH. Although texts about existential threats amplified fears of such scenarios, respondents remained far more concerned about present problems, including systematic bias in AI decisions and job losses due to AI. The study also shows that people are capable of distinguishing between theoretical dangers and specific, tangible problems, and take both seriously.

Conduct broad dialogue on AI risks

The study thus fills a significant gap in knowledge. In public discussion, fears are often voiced that focusing on sensational future scenarios distracts attention from pressing present problems. The study is the first to deliver systematic data showing that awareness of actual present threats persists even when people are confronted with apocalyptic warnings. "Our study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems," says co-author Emma Hoes. Gilardi adds: "The public discourse shouldn't be 'either-or.' A concurrent understanding and appreciation of both the immediate and potential future challenges is needed."
[2]
Current AI risks more alarming than apocalyptic future scenarios, political scientists find
Published in the journal Proceedings of the National Academy of Sciences.
[3]
People Worry More About Today's AI Harms Than Future Catastrophes - Neuroscience News
Summary: A new study finds that people are more concerned about the immediate risks of artificial intelligence, like job loss, bias, and disinformation, than they are about hypothetical future threats to humanity. Researchers exposed over 10,000 participants to different AI narratives and found that, while future catastrophes raise concern, real-world present dangers resonate more strongly. This challenges the idea that dramatic "doomsday" messaging distracts from urgent issues. The findings suggest the public is capable of holding nuanced views and supports a balanced conversation about both current and long-term AI risks.
Existential Risk Narratives About Artificial Intelligence Do Not Distract From Its Immediate Harms

Abstract: There is broad consensus that AI presents risks, but considerable disagreement about the nature of those risks. These differing viewpoints can be understood as distinct narratives, each offering a specific interpretation of AI's potential dangers. One narrative focuses on doomsday predictions of AI posing long-term existential risks for humanity. Another narrative prioritizes immediate concerns that AI brings to society today, such as the reproduction of biases embedded into AI systems. A significant point of contention is that the "existential risk" narrative, which is largely speculative, may distract from the less dramatic but real and present dangers of AI. We address this "distraction hypothesis" by examining whether a focus on existential threats diverts attention from the immediate risks AI poses today. In three preregistered, online survey experiments (N = 10,800), participants were exposed to news headlines that either depicted AI as a catastrophic risk, highlighted its immediate societal impacts, or emphasized its potential benefits. Results show that i) respondents are much more concerned with the immediate, rather than existential, risks of AI, and ii) existential risk narratives increase concerns for catastrophic risks without diminishing the significant worries respondents express for immediate harms. These findings provide important empirical evidence to inform ongoing scientific and political debates on the societal implications of AI.
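The paper's headline result rests on a standard between-subjects comparison: randomize participants into one of three headline conditions, then compare mean concern for existential versus immediate risks across arms. The following is a minimal sketch of that kind of analysis on simulated data; the condition names, the 1-7 concern scale, the effect sizes, and the difference-in-means estimator are all illustrative assumptions, not the authors' data or analysis code.

```python
# Sketch of a three-arm between-subjects analysis, mirroring the design
# described in the abstract. All numbers below are simulated assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_per_arm = 3600  # ~10,800 participants split evenly across three arms

# Assumed mean concern per arm on a 1-7 scale:
#   (concern about existential risks, concern about immediate harms)
# The pattern mimics the reported findings: immediate-harm concern is
# high everywhere; the existential framing raises existential concern
# without lowering immediate concern.
means = {
    "existential_risk": (4.2, 5.4),
    "immediate_harms":  (3.6, 5.5),
    "benefits":         (3.5, 5.3),
}

def simulate(arm):
    """Draw simulated responses for one arm, clipped to the 1-7 scale."""
    mu_exist, mu_immed = means[arm]
    exist = np.clip(rng.normal(mu_exist, 1.2, n_per_arm), 1, 7)
    immed = np.clip(rng.normal(mu_immed, 1.2, n_per_arm), 1, 7)
    return exist, immed

data = {arm: simulate(arm) for arm in means}

def diff_in_means(treat, control):
    """Treatment effect vs. control as a difference in means, with SE."""
    d = treat.mean() - control.mean()
    se = np.sqrt(treat.var(ddof=1) / treat.size
                 + control.var(ddof=1) / control.size)
    return d, se

# Compare each treatment arm against the "benefits" baseline on both outcomes.
for outcome, idx in [("existential concern", 0), ("immediate concern", 1)]:
    for arm in ["existential_risk", "immediate_harms"]:
        d, se = diff_in_means(data[arm][idx], data["benefits"][idx])
        print(f"{arm} vs benefits, {outcome}: {d:+.2f} (SE {se:.2f})")
```

On these simulated numbers, the existential-risk framing raises existential concern relative to the benefits baseline while leaving immediate-harm concern essentially unchanged, which is the pattern the abstract reports.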
[4]
People fear AI taking jobs more than AI threatening humanity
A new study by the University of Zurich finds that people are more worried about current AI risks like job loss and bias than potential future threats to humanity, challenging the notion that apocalyptic scenarios distract from pressing issues.
A groundbreaking study conducted by political scientists at the University of Zurich has shed light on public perceptions of artificial intelligence (AI) risks. The research, published in the Proceedings of the National Academy of Sciences, challenges the notion that focusing on long-term existential threats distracts from immediate AI-related concerns [1][2].
The study involved three large-scale online experiments with over 10,000 participants from the United States and the United Kingdom. Researchers exposed subjects to various narratives about AI, including catastrophic risks, present threats, and potential benefits [1][2][3].
Professor Fabrizio Gilardi, lead researcher from the Department of Political Science at UZH, stated, "Our findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes" [1]. The study revealed that participants were particularly concerned about immediate harms such as systematic bias in AI decisions, job losses due to AI, and the spread of disinformation.
Contrary to concerns that apocalyptic scenarios might overshadow current issues, the study found that people can distinguish between theoretical dangers and tangible problems, taking both seriously [2]. Co-author Emma Hoes emphasized, "Our study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems" [1][3].
The research fills a significant knowledge gap by providing systematic data on how different AI narratives affect public perception. It suggests that the public discourse on AI risks should not be an "either-or" debate [4]. Professor Gilardi advocated for "a concurrent understanding and appreciation of both the immediate and potential future challenges" [1][2][3].
The study comes amid growing concerns about AI's societal impact. While some experts warn about long-term existential risks, others focus on immediate issues like privacy concerns, algorithmic bias, and the potential for AI to exacerbate social inequalities [3][4].
This research provides valuable insights for policymakers and AI developers. It suggests that addressing current AI-related problems should be a priority, without neglecting potential long-term risks. Future studies may need to explore how public perception influences AI policy development and implementation [2][4].
As AI continues to advance rapidly, maintaining a balanced approach to risk assessment and mitigation will be crucial for harnessing its benefits while safeguarding against both immediate and potential future threats.