Curated by THEOUTPOST
On Fri, 13 Sept, 12:05 AM UTC
21 Sources
[1]
AI chatbot that persuades users to stop believing unfounded conspiracy theories - Times of India
Shortly after generative artificial intelligence hit the mainstream, researchers warned that chatbots would create a dire problem: As disinformation became easier to create, conspiracy theories would spread rampantly. Now, researchers wonder if chatbots might also offer a solution. DebunkBot, an AI chatbot designed to "very effectively persuade" users to stop believing unfounded conspiracy theories, made significant and long-lasting progress at changing people's convictions, according to a study published Thursday in the journal Science. The new findings challenge the widely held belief that facts and logic cannot combat conspiracy theories. The DebunkBot, built on the technology that underlies ChatGPT, may offer a practical way to channel facts. Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of explaining would pull that person out. The theory was that people adopt conspiracy theories to sate an underlying need to explain and control their environment, said Thomas Costello, co-author of the study and an assistant professor of psychology. But Costello and his colleagues wondered whether there might be another explanation: What if debunking attempts haven't been personalised enough? Since conspiracy theories vary from person to person - and each person may cite different evidence to support their ideas - perhaps a one-size-fits-all debunking script isn't the best strategy. A chatbot that can counter each person's conspiratorial claim with troves of information might be much more effective, they thought. To test that hypothesis, they recruited over 2,000 adults, asked them to elaborate on a conspiracy they believed in, and rate how much they believed it on a scale from zero to 100. Then, some participants had a brief discussion with the chatbot. One participant, for example, believed the 9/11 terrorist attacks were an "inside job" because jet fuel couldn't have burned hot enough to melt the steel beams of the World Trade Center. The chatbot responded: "It is a common misconception that the steel needed to melt for the towers to collapse," it wrote. "Steel starts to lose strength and becomes more pliable at temperatures much lower than its melting point, which is around 2,500 degrees Fahrenheit." After three exchanges, which lasted eight minutes on average, participants rated how they felt about their beliefs again. On average, the ratings dropped by about 20%; about one-fourth of participants no longer believed the falsehood. The authors are exploring how they can re-create this effect in the real world. They have considered linking the chatbot in forums where these beliefs are shared, or buying ads that pop up when someone searches for a common theory. (NYT)
[2]
This chatbot pulls people away from conspiracy theories
Shortly after generative artificial intelligence hit the mainstream, researchers warned that chatbots would create a dire problem: As disinformation became easier to create, conspiracy theories would spread rampantly. Now, researchers wonder if chatbots might also offer a solution. DebunkBot, an AI chatbot designed by researchers to "very effectively persuade" users to stop believing unfounded conspiracy theories, made significant and long-lasting progress at changing people's convictions, according to a study published Thursday in the journal Science. Indeed, false theories are believed by up to half of the American public and can have damaging consequences, such as discouraging vaccinations or fueling discrimination. The new findings challenge the widely held belief that facts and logic cannot combat conspiracy theories. The DebunkBot, built on the technology that underlies ChatGPT, may offer a practical way to channel facts. "The work does overturn a lot of how we thought about conspiracies," said Gordon Pennycook, a psychology professor at Cornell University and co-author of the study. Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of arguing or explaining would pull that person out. The theory was that people adopt conspiracy theories to sate an underlying need to explain and control their environment, said Thomas Costello, another co-author of the study and an assistant professor of psychology at American University. But Costello and his colleagues wondered whether there might be another explanation: What if debunking attempts just haven't been personalized enough? Since conspiracy theories vary so much from person to person -- and each person may cite different pieces of evidence to support one's ideas -- perhaps a one-size-fits-all debunking script isn't the best strategy. A chatbot that can counter each person's conspiratorial claim of choice with troves of information might be much more effective, the researchers thought. To test that hypothesis, they recruited more than 2,000 adults across the country, asked them to elaborate on a conspiracy that they believed in and rate how much they believed it on a scale from zero to 100. People described a wide range of beliefs, including theories that the moon landing had been staged, that COVID-19 had been created by humans to shrink the population and that President John F. Kennedy had been killed by the CIA. Then, some of the participants had a brief discussion with the chatbot. They knew they were chatting with an AI but didn't know the purpose of the discussion. Participants were free to present the evidence that they believed supported their positions. One participant, for example, believed the 9/11 terrorist attacks were an "inside job" because jet fuel couldn't have burned hot enough to melt the steel beams of the World Trade Center. The chatbot responded: "It is a common misconception that the steel needed to melt for the World Trade Center towers to collapse," it wrote. "Steel starts to lose strength and becomes more pliable at temperatures much lower than its melting point, which is around 2,500 degrees Fahrenheit." After three exchanges, which lasted about eight minutes on average, participants rated how strongly they felt about their beliefs again. On average, their ratings dropped by about 20%; about one-fourth of participants no longer believed the falsehood. 
The effect also spilled into their attitudes toward other poorly supported theories, making the participants slightly less conspiratorial in general. Ethan Porter, a misinformation researcher at George Washington University not associated with the study, said that what separated the chatbot from other misinformation interventions was how robust the effect seemed to be. When participants were surveyed two months later, the chatbot's impact on mistaken beliefs remained unchanged. "Oftentimes, when we study efforts to combat misinformation, we find that even the most effective interventions can have short shelf lives," Porter said. "That's not what happened with this intervention." Researchers are still teasing out exactly why the DebunkBot works so well. An unpublished follow-up study, in which researchers stripped out the chatbot's niceties ("I appreciate that you've taken the time to research the JFK assassination") bore the same results, suggesting that it's the information, not the chatbot itself, that's changing people's minds, said David Rand, a computational social scientist at the Massachusetts Institute of Technology and an author of the paper. "It is the facts and evidence themselves that are really doing the work here," he said. The authors are exploring how they might re-create this effect in the real world, where people don't necessarily seek out information that disproves their beliefs. They have considered linking the chatbot in forums where these beliefs are shared, or buying ads that pop up when someone searches a keyword related to a common conspiracy theory. For a more targeted approach, Rand said, the chatbot might be useful in a doctor's office to help debunk misapprehensions about vaccinations. Brendan Nyhan, a misperception researcher at Dartmouth College also not associated with the study, said he wondered whether the reputation of generative AI might eventually change, making the chatbot less trusted and therefore less effective. "You can imagine a world where AI information is seen the way mainstream media is seen," he said. "I do wonder if how people react to this stuff is potentially time-bound."
[3]
AI chatbot gets conspiracy theorists to question their convictions
Researchers have shown that artificial intelligence (AI) could be a valuable tool in the fight against conspiracy theories, by designing a chatbot that can debunk false information and get people to question their thinking. In a study published in Science on 12 September, participants spent a few minutes interacting with the chatbot, which provided detailed responses and arguments, and experienced a shift in thinking that lasted for months. This result suggests that facts and evidence really can change people's minds. "This paper really challenged a lot of existing literature about us living in a post-truth society," says Katherine FitzGerald, who researches conspiracy theories and misinformation at Queensland University of Technology in Brisbane, Australia. Previous analyses have suggested that people are attracted to conspiracy theories because of a desire for safety and certainty in a turbulent world. But "what we found in this paper goes against that traditional explanation", says study co-author Thomas Costello, a psychology researcher at American University in Washington DC. "One of the potentially cool applications of this research is you could use AI to debunk conspiracy theories in real life." Surveys suggest that around 50% of Americans put stock in a conspiracy theory -- ranging from the 1969 Moon landing being faked to COVID-19 vaccines containing microchips that enable mass surveillance. The rise of social-media platforms that allow easy information sharing has aggravated the problem. Although many conspiracies don't have much societal impact, the ones that catch on can "cause some genuine harm", says FitzGerald. She points to the attack on the US Capitol building on 6 January 2021 -- which was partly driven by claims that the 2020 presidential election was rigged -- and anti-vaccine rhetoric affecting COVID-19 vaccine uptake as examples. It is possible to convince people to change what they think, but doing so can be time-consuming and draining -- and the sheer number and variety of conspiracy theories make the issue difficult to address on a large scale. But Costello and his colleagues wanted to explore the potential of large language models (LLMs) -- which can quickly process vast amounts of information and generate human-like responses -- to tackle conspiracy theories. "They've been trained on the Internet, they know all the conspiracies and they know all the rebuttals, and so it seemed like a really natural fit," says Costello. The researchers designed a custom chatbot using GPT-4 Turbo -- the newest LLM from ChatGPT creator OpenAI, based in San Francisco, California -- that was trained to argue convincingly against conspiracies. They then recruited more than 1,000 participants, whose demographics were quota-matched to the US census in terms of characteristics such as gender and ethnicity. Costello says that, by recruiting "people who have had different life experiences and are bringing in their own perspectives", the team could assess the chatbot's ability to debunk a variety of conspiracies. Each participant was asked to describe a conspiracy theory, share why they thought it was true and express the strength of their conviction as a percentage. These details were shared with the chatbot, which then engaged in a conversation with the participant, in which it pointed to information and evidence that undermined or debunked the conspiracy and responded to the participant's questions. The chatbot's responses were thorough and detailed, often reaching hundreds of words. 
On average, each conversation lasted about 8 minutes. The approach proved effective: participants' self-rated confidence in their chosen conspiracy theory decreased by an average of 21% after interacting with the chatbot. And 25% of participants went from being confident about their thinking (a score of more than 50%) to being uncertain. The shift was negligible for control groups, who spoke to the same chatbot for a similar length of time but on an unrelated topic. A follow-up survey two months later showed that the shift in perspective had persisted for many participants. Although the results of the study are promising, the researchers note that the participants were paid survey respondents and might not be representative of people who are deeply entrenched in conspiracy theories. FitzGerald is excited by AI's potential to combat conspiracies. "If we can have a way to intervene and stop offline violence from happening, then that's always a good thing," she says. She suggests that follow-up studies could explore different metrics for assessing the chatbot's effectiveness, or replicate the study using LLMs with less-advanced safety measures to make sure they don't reinforce conspiratorial thinking. Previous studies have raised concerns about the tendency of AI chatbots to 'hallucinate' false information. The study did take care to avoid this possibility -- Costello's team asked a professional fact-checker to assess the accuracy of the information provided by the chatbot, and the fact-checker confirmed that none of its statements were false or politically biased. Costello says that the team is planning further experiments to investigate different chatbot strategies, for example by testing what happens when the chatbot's responses aren't polite. They hope that by pinpointing "the experiments where the persuasion doesn't work anymore", they'll learn more about what made this particular study so successful.
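To make the belief-shift figures reported above more concrete, here is a minimal, purely illustrative sketch of how the two headline numbers could be computed from before-and-after confidence ratings on a 0-100 scale. The ratings below are invented for illustration, and the exact metric definitions used in the study may differ from this sketch.

```python
# Illustrative only: toy pre/post belief ratings (0-100 scale), not the study's data.
# The exact metric definitions used in the paper may differ from this sketch.

pre =  [100, 80, 65, 90, 55, 70]   # hypothetical confidence before the chat
post = [ 40, 75, 30, 85, 45, 60]   # hypothetical confidence after the chat

# Average drop in self-rated confidence, in points on the 0-100 scale.
mean_drop = sum(b - a for b, a in zip(pre, post)) / len(pre)

# Share of participants who started above the 50-point midpoint ("believers")
# but ended at or below it ("uncertain or disbelieving").
crossed = sum(1 for b, a in zip(pre, post) if b > 50 and a <= 50)
share_crossed = crossed / sum(1 for b in pre if b > 50)

print(f"average drop: {mean_drop:.1f} points")
print(f"share of initial believers who moved below the midpoint: {share_crossed:.0%}")
```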
[4]
Can AI talk us out of conspiracy theory rabbit holes?
New research published in Science shows that for some people who believe in conspiracy theories, a fact-based conversation with an artificial intelligence (AI) chatbot can "pull them out of the rabbit hole". Better yet, it seems to keep them out for at least two months. This research, carried out by Thomas Costello at the Massachusetts Institute of Technology and colleagues, shows promise for a challenging social problem: belief in conspiracy theories. Some conspiracy theories are relatively harmless, such as believing Finland doesn't exist (which is fine, until you meet a Finn). Other theories, though, reduce trust in public institutions and science. This becomes a problem when conspiracy theories persuade people not to get vaccinated or not to take action against climate change. At its most extreme, belief in conspiracy theories has been associated with people dying.

Conspiracy theories are 'sticky'
Despite the negative impacts of conspiracy theories, they have proven very "sticky". Once people believe in a conspiracy theory, changing their mind is hard. The reasons for this are complex. Conspiracy theorist beliefs are associated with communities, and conspiracy theorists have often done extensive research to reach their position. When a person no longer trusts science or anyone outside their community, it's hard to change their beliefs.

Enter AI
The explosion of generative AI into the public sphere has increased concerns about people believing in things that aren't true. AI makes it very easy to create believable fake content. Even if used in good faith, AI systems can get facts wrong. (ChatGPT and other chatbots even warn users that they might be wrong about some topics.) AI systems also contain widespread biases, meaning they can promote negative beliefs about some groups of people. Given all this, it's quite surprising that a chat with a system known to produce fake news can convince some people to abandon conspiracy theories, and that the change seems to be long lasting. However, this new research leaves us with a good-news/bad-news problem. It's great we've identified something that has some effect on conspiracy theorist beliefs! But if AI chatbots are good at talking people out of sticky, anti-scientific beliefs, what does that mean for true beliefs?

What can the chatbots do?
Let's dig into the new research in more detail. The researchers were interested to know whether factual arguments could be used to persuade people against conspiracy theorist beliefs. This research used over 2,000 participants across two studies, all chatting with an AI chatbot after describing a conspiracy theory they believed. All participants were told they were talking to an AI chatbot. The people in the "treatment" group (60 per cent of all participants) conversed with a chatbot that was personalised to their particular conspiracy theory, and the reasons why they believed in it. This chatbot tried to convince these participants that their beliefs were wrong using factual arguments over three rounds of conversation (the participant and the chatbot each taking a turn to talk is a round). The remaining participants had a general discussion with a chatbot. The researchers found that, after the discussion, participants in the treatment group showed an average reduction of about 20 per cent in their belief in their chosen conspiracy theory. When the researchers checked in with participants two months later, most of these people still showed reduced belief in conspiracy theories. The scientists even checked whether the AI chatbots were accurate, and they (mostly) were. We can see that for some people at least, a three-round conversation with a chatbot can persuade them against a conspiracy theory.

So we can fix things with chatbots?
Chatbots do offer some promise with two of the challenges in addressing false beliefs. Because they are computers, they are not perceived as having an "agenda", making what they say more trustworthy (especially to someone who has lost faith in public institutions). Chatbots can also put together an argument, which is better than facts alone. A simple recitation of facts is only minimally effective against fake beliefs. Chatbots aren't a cure-all though. This study showed they were more effective for people who didn't have strong personal reasons for believing in a conspiracy theory, meaning they probably won't help people for whom conspiracy is community.

So should I use ChatGPT to check my facts?
This study demonstrates how persuasive chatbots can be. This is great when they are primed to convince people of facts, but what if they aren't? One major way chatbots can promote misinformation or conspiracies is when their underlying data is wrong or biased: the chatbot will reflect this. Some chatbots are designed to deliberately reflect biases or increase or limit transparency. You can even chat to versions of ChatGPT customised to argue that Earth is flat. A second, more worrying possibility is that as chatbots respond to biased prompts (that searchers may not realise are biased), they may perpetuate misinformation (including conspiracy beliefs). We already know that people are bad at fact-checking, and when they use search engines to do so, those search engines respond to their (unwittingly biased) search terms, reinforcing beliefs in misinformation. Chatbots are likely to be the same. Ultimately, chatbots are a tool. They may be helpful in debunking conspiracy theories - but like any tool, the skill and intention of the toolmaker and user matter. Conspiracy theories start with people, and it will be people that end them. (The Conversation)
[5]
Chatbots can persuade people to stop believing in conspiracy theories
Now, researchers believe they've uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people's belief in it by about 20% -- even among participants who claimed that their beliefs were important to their identity. The research is published today in the journal Science. The findings could represent an important step forward in how we engage with and educate people who espouse such baseless theories, says Yunhao (Jerry) Zhang, a postdoc fellow affiliated with the Psychology of Technology Institute who studies AI's impacts on society. "They show that with the help of large language models, we can -- I wouldn't say solve it, but we can at least mitigate this problem," he says. "It points out a way to make society better." Few interventions have been proven to change conspiracy theorists' minds, says Thomas Costello, a research affiliate at MIT Sloan and the lead author of the study. Part of what makes it so hard is that different people tend to latch on to different parts of a theory. This means that while presenting certain bits of factual evidence may work on one believer, there's no guarantee that it'll prove effective on another. That's where AI models come in, he says. "They have access to a ton of information across diverse topics, and they've been trained on the internet. Because of that, they have the ability to tailor factual counterarguments to particular conspiracy theories that people believe." The team tested its method by asking 2,190 crowdsourced workers to participate in text conversations with GPT-4 Turbo, OpenAI's latest large language model. Participants were asked to share details about a conspiracy theory they found credible, why they found it compelling, and any evidence they felt supported it. These answers were used to tailor responses from the chatbot, which the researchers had prompted to be as persuasive as possible.
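To make the setup described above more concrete, here is a minimal sketch of how a personalized debunking conversation could be wired up with the OpenAI chat API. It is not the researchers' actual code: the prompt wording, the model name, the participant inputs, and the three-round structure are assumptions based only on the descriptions in these articles.

```python
# Minimal sketch of a personalized debunking chat, based on the setup described
# in the articles above. NOT the researchers' actual code: the prompt wording,
# model choice, and round structure are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Details the participant supplies before the conversation starts (hypothetical inputs).
conspiracy_summary = "The 9/11 attacks were an inside job."
participant_reasons = "Jet fuel can't burn hot enough to melt steel."
initial_confidence = 80  # self-rated belief on a 0-100 scale

system_prompt = (
    "You are talking with someone who believes the following conspiracy theory: "
    f"{conspiracy_summary}\n"
    f"Their stated reasons: {participant_reasons}\n"
    f"Their self-rated confidence is {initial_confidence}/100.\n"
    "Politely and persuasively counter their specific claims using accurate, "
    "well-sourced evidence. Do not invent facts."
)

messages = [{"role": "system", "content": system_prompt}]

# Three rounds: the participant writes, the model replies.
for _ in range(3):
    user_turn = input("Participant: ")
    messages.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(
        model="gpt-4-turbo",  # stand-in for the GPT-4 Turbo model named in the coverage
        messages=messages,
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Chatbot:", answer)
```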
[6]
Could an AI chatbot talk you out of believing a conspiracy theory?
A new study suggests that an AI chatbot can talk people out of their conspiracy theory beliefs. Given the presidential debate this week, you probably heard plenty of misinformation and conspiracy theories. Indeed, reporters and fact checkers were working overtime to specifically determine whether Haitian immigrants in Ohio were eating domestic pets, as grotesquely alleged by Republican presidential contender Donald Trump and his vice presidential running mate, Ohio Senator J.D. Vance. Neither has produced evidence proving their claim, and local officials say it's untrue. Still, the false allegation is all over the internet. Experts have long worried about how rapidly conspiracy theories can spread, and some research suggests that people can't be persuaded by facts that contradict those beliefs. But a new study published today in Science offers hope that many people can and will abandon conspiracy theories under the right circumstances. In this case, researchers tested whether conversations with a chatbot powered by generative artificial intelligence could successfully engage with people who believed popular conspiracy theories, like that the Sept. 11 attacks were orchestrated by the American government and that the COVID-19 virus was a man-made attempt by "global elites" to "control the masses." The study's 2,190 participants had tailored back-and-forth conversations about a single conspiracy theory of their choice with OpenAI's GPT-4 Turbo. The model had been trained on a large amount of data from the internet and licensed sources. After the participants' discussions, the researchers found a 20 percent reduction in conspiracy belief on average. What's more, a quarter of participants had stopped adhering to the conspiracy theory they'd discussed. That decrease persisted two months after their interaction with the chatbot. David Rand, a co-author of the study, said the findings indicate people's minds can be changed with facts, despite pessimism about that prospect. "Evidence isn't dead," Rand told Mashable. "Facts and evidence do matter to a substantial degree to a lot of people." Rand, who is a professor of management science and brain and cognitive sciences at MIT, and his co-authors didn't test whether the study participants were more likely to change their minds after talking to a chatbot versus someone they know in real life, like a best friend or sibling. But they suspect the chatbot's success has to do with how quickly it can marshal accurate facts and evidence in response. In a sample conversation included in the study, a participant who thinks that the Sept. 11 attacks were staged receives an exhaustive scientific explanation from the chatbot about how the Twin Towers collapsed without the aid of explosive detonations, among other popular related conspiracy claims. At the outset, the participant felt 100 percent confident in the conspiracy theory; by the end, their confidence dropped to 40 percent. Anyone who's ever tried to discuss a conspiracy theory with someone who believes it may have experienced rapid-fire exchanges filled with what Rand described as "weird esoteric facts and links" that are incredibly difficult to disprove. A generative AI chatbot, however, doesn't have that problem, because it can instantaneously respond with fact-based information.
Nor is an AI chatbot hampered by personal relationship dynamics, such as whether a long-running sibling rivalry or dysfunctional friendship shapes how a conspiracy theorist views the person offering counter information. In general, the chatbot was trained to be polite to participants, building a rapport with them by validating their curiosity or confusion. The researchers also asked participants about their trust in artificial intelligence. They found that the more a participant trusted AI, the more likely they were to suspend their conspiracy theory belief in response to the conversation. But even those skeptical of AI were capable of changing their minds. Importantly, the researchers hired a professional fact-checker to evaluate the claims made by the chatbot, to ensure it wasn't sharing false information, or making things up. The fact-checker rated nearly all of them as true and none of them as false. For now, people who are curious about the researchers' work can try it out for themselves by using their DebunkBot, which allows users to test their beliefs against an AI. Rand and his co-authors imagine a future in which a chatbot might be connected to social media accounts as a way to counter conspiracy theories circulating on a platform. Or people might find a chatbot when they search online for information about viral rumors or hoaxes thanks to keyword ads tied to certain conspiracy search terms. Rand said the study's success, which he and his co-authors have replicated, offers an example of how AI can be used for good. Still, he's not naive about the potential for bad actors to use the technology to build a chatbot that confirms certain conspiracy theories. Imagine, for example, a chatbot that's been trained on social media posts that contain false claims. "It remains to be seen, essentially, how all of this shakes out," Rand said. "If people are mostly using these foundation models from companies that are putting a lot of effort into really trying to make them accurate, we have a reasonable shot at this becoming a tool that's widely useful and trusted."
[7]
This Chatbot Pulls People Away From Conspiracy Theories
In a new study, many people doubted or abandoned false beliefs after a short conversation with the DebunkBot. Shortly after generative artificial intelligence hit the mainstream, researchers warned that chatbots would create a dire problem: As disinformation became easier to create, conspiracy theories would spread rampantly. Now, researchers wonder if chatbots might also offer a solution. DebunkBot, an A.I. chatbot designed by researchers to "very effectively persuade" users to stop believing unfounded conspiracy theories, made significant and long-lasting progress at changing people's convictions, according to a study published on Thursday in the journal Science. Indeed, false theories are believed by up to half of the American public and can have damaging consequences, like discouraging vaccinations or fueling discrimination. The new findings challenge the widely held belief that facts and logic cannot combat conspiracy theories. The DebunkBot, built on the technology that underlies ChatGPT, may offer a practical way to channel facts. "The work does overturn a lot of how we thought about conspiracies," said Gordon Pennycook, a psychology professor at Cornell University and a co-author of the study. Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of arguing or explaining would pull that person out.
[8]
Conversations with AI chatbots can reverse conspiracy beliefs - Earth.com
The saying "once a person is down the rabbit hole of conspiracy theories, they're lost for good" describes the typical consensus about conspiracy theorists. Well, that idea is being overturned by artificial intelligence (AI). An intriguing research study has demonstrated that even the staunchest conspiracy theorists can shift their views following brief interactions with AI chatbots. This holds true even for those with deep-seated beliefs in some of the most entrenched conspiracies, such as those about fraud in the 2020 U.S. presidential election and the COVID-19 pandemic. Fueled by political polarization, social media, and widespread misinformation, conspiracy theories have grown into a significant public concern. They have managed to drive a wedge between theorists and their close friends and relatives. A YouGov survey conducted last December showed a large number of Americans succumbing to various baseless conspiracies. The study presents a strong challenge to the accepted view within psychology that conspiracy theorists stick to their beliefs because those beliefs hold value for their identities or resonate with their underlying motivations. Thomas Costello, assistant professor of psychology at American University and the lead author of this study, is of the view that most previous approaches have focused on preventing people from forming these beliefs, rather than rectifying them. "Many conspiracy believers were indeed willing to update their views when presented with compelling counterevidence. I was quite surprised at first, but reading through the conversations turned me into a believer," Costello enthused. "The AI managed to provide lengthy, highly detailed reasons debunking the claimed conspiracy in each round of conversation. It also proved adept at establishing cordial rapport with the participants." The study comes at a time when society is in the middle of a debate on the potential benefits and risks of AI and chatbots. Large language AI models serve as rich repositories of knowledge, and the authors of this research emphasize one way in particular that these vast databases can help people form more accurate beliefs. Artificial intelligence used in chatbots can gather data from diverse topics within seconds, making it possible to tailor counterarguments against specific conspiracy theories in ways that humans simply can't match. Gordon Pennycook, associate professor of psychology at Cornell University and co-author of the paper, echoes this sentiment. "Previous attempts to debunk dubious beliefs have a major limitation: One has to guess what people's actual beliefs are in order to debunk them -- not an easy task," Pennycook remarked. "In contrast, the AI is designed to respond directly to people's specific arguments using strong counterevidence. This offers a unique chance to assess just how responsive people can be to counterevidence." The chatbot was designed to be as persuasive as possible while engaging participants in tailored dialogues. GPT-4, the AI model powering ChatGPT, provided factual responses to participants' conspiratorial claims, which lent the rebuttals more credibility. The study's results are quite impressive, to say the least, offering a sense of optimism in the realm of psychology and cognitive science. Participants exhibited a notable shift in their beliefs after engaging with the AI, demonstrating a reduction in conviction towards their previously held conspiracy theories.
This shift suggests that even the most entrenched beliefs may not be as immutable as once thought. The implications extend beyond academic circles. They prompt a re-evaluation of strategies used in public health campaigns, educational efforts, and policy-making aimed at combating misinformation. By harnessing AI's potential to engage individuals with tailored, evidence-based dialogues, society can make strides toward promoting more informed and rational public discourse. The researchers acknowledge that the study is just the beginning, and many avenues for future research remain. One area to explore is the long-term effects of AI-facilitated belief modification. While the study showed that the effect held for at least two months, understanding whether these changes persist over longer periods is crucial. Additionally, future studies could investigate the effectiveness of AI in addressing conspiracy theories across different cultural and linguistic contexts, as well as its potential adaptability to various mediums, such as social media platforms or educational tools. Expanding the scope of AI's application may reveal broader opportunities for its integration into societal efforts aimed at fostering critical thinking and evidence-based reasoning. To sum it all up, these results presented by Thomas Costello and his team are encouraging and suggest a future where AI could play a role in helping reduce conspiracy beliefs when used responsibly. Nonetheless, further studies will be required to assess long-term effects, test different AI models, and explore practical applications outside of a lab setting. David Rand, a paper co-author and professor at the MIT Sloan School of Management, is optimistic about this potential. "Although much ink has been spilled over the potential for generative AI to supercharge disinformation, our study shows that it can also be part of the solution," said Rand. "Large language models like GPT-4 have the potential to counter conspiracies on a massive scale." For those interested in engaging with this ongoing work, there is a website available where the public can try out the intervention for themselves.
[9]
Can AI talk us out of conspiracy theory rabbit holes?
New research published in Science shows that for some people who believe in conspiracy theories, a fact-based conversation with an artificial intelligence (AI) chatbot can "pull them out of the rabbit hole". Better yet, it seems to keep them out for at least two months. This research, carried out by Thomas Costello at the Massachusetts Institute of Technology and colleagues, shows promise for a challenging social problem: belief in conspiracy theories. Some conspiracy theories are relatively harmless, such as believing Finland doesn't exist (which is fine, until you meet a Finn). Other theories, though, reduce trust in public institutions and science. This becomes a problem when conspiracy theories persuade people not to get vaccinated or not to take action against climate change. At its most extreme, belief in conspiracy theories has been associated with people dying.

Conspiracy theories are 'sticky'
Despite the negative impacts of conspiracy theories, they have proven very "sticky". Once people believe in a conspiracy theory, changing their mind is hard. The reasons for this are complex. Conspiracy theorist beliefs are associated with communities, and conspiracy theorists have often done extensive research to reach their position. When a person no longer trusts science or anyone outside their community, it's hard to change their beliefs.

Enter AI
The explosion of generative AI into the public sphere has increased concerns about people believing in things that aren't true. AI makes it very easy to create believable fake content. Even if used in good faith, AI systems can get facts wrong. (ChatGPT and other chatbots even warn users that they might be wrong about some topics.) AI systems also contain widespread biases, meaning they can promote negative beliefs about some groups of people. Given all this, it's quite surprising that a chat with a system known to produce fake news can convince some people to abandon conspiracy theories, and that the change seems to be long lasting. However, this new research leaves us with a good-news/bad-news problem. It's great we've identified something that has some effect on conspiracy theorist beliefs! But if AI chatbots are good at talking people out of sticky, anti-scientific beliefs, what does that mean for true beliefs?

What can the chatbots do?
Let's dig into the new research in more detail. The researchers were interested to know whether factual arguments could be used to persuade people against conspiracy theorist beliefs. This research used over 2,000 participants across two studies, all chatting with an AI chatbot after describing a conspiracy theory they believed. All participants were told they were talking to an AI chatbot. The people in the "treatment" group (60% of all participants) conversed with a chatbot that was personalised to their particular conspiracy theory, and the reasons why they believed in it. This chatbot tried to convince these participants that their beliefs were wrong using factual arguments over three rounds of conversation (the participant and the chatbot each taking a turn to talk is a round). The remaining participants had a general discussion with a chatbot. The researchers found that, after the discussion, participants in the treatment group showed an average reduction of about 20% in their belief in their chosen conspiracy theory. When the researchers checked in with participants two months later, most of these people still showed reduced belief in conspiracy theories. The scientists even checked whether the AI chatbots were accurate, and they (mostly) were. We can see that for some people at least, a three-round conversation with a chatbot can persuade them against a conspiracy theory.

So we can fix things with chatbots?
Chatbots do offer some promise with two of the challenges in addressing false beliefs. Because they are computers, they are not perceived as having an "agenda", making what they say more trustworthy (especially to someone who has lost faith in public institutions). Chatbots can also put together an argument, which is better than facts alone. A simple recitation of facts is only minimally effective against fake beliefs. Chatbots aren't a cure-all though. This study showed they were more effective for people who didn't have strong personal reasons for believing in a conspiracy theory, meaning they probably won't help people for whom conspiracy is community.

So should I use ChatGPT to check my facts?
This study demonstrates how persuasive chatbots can be. This is great when they are primed to convince people of facts, but what if they aren't? One major way chatbots can promote misinformation or conspiracies is when their underlying data is wrong or biased: the chatbot will reflect this. Some chatbots are designed to deliberately reflect biases or increase or limit transparency. You can even chat to versions of ChatGPT customised to argue that Earth is flat. A second, more worrying possibility is that as chatbots respond to biased prompts (that searchers may not realise are biased), they may perpetuate misinformation (including conspiracy beliefs). We already know that people are bad at fact-checking, and when they use search engines to do so, those search engines respond to their (unwittingly biased) search terms, reinforcing beliefs in misinformation. Chatbots are likely to be the same. Ultimately, chatbots are a tool. They may be helpful in debunking conspiracy theories - but like any tool, the skill and intention of the toolmaker and user matter. Conspiracy theories start with people, and it will be people that end them.
[10]
Can AI talk us out of conspiracy theory rabbit holes?
Some conspiracy theories are relatively harmless, such as believing Finland doesn't exist (which is fine, until you meet a Finn). Other theories, though, reduce trust in public institutions and science. This becomes a problem when conspiracy theories persuade people not to get vaccinated or not to take action against climate change. At its most extreme, belief in conspiracy theories has been associated with people dying.

Conspiracy theories are 'sticky'
Despite the negative impacts of conspiracy theories, they have proven very "sticky". Once people believe in a conspiracy theory, changing their mind is hard. The reasons for this are complex. Conspiracy theorist beliefs are associated with communities, and conspiracy theorists have often done extensive research to reach their position. When a person no longer trusts science or anyone outside their community, it's hard to change their beliefs.

Enter AI
The explosion of generative AI into the public sphere has increased concerns about people believing in things that aren't true. AI makes it very easy to create believable fake content. Even if used in good faith, AI systems can get facts wrong. (ChatGPT and other chatbots even warn users that they might be wrong about some topics.) AI systems also contain widespread biases, meaning they can promote negative beliefs about some groups of people. Given all this, it's quite surprising that a chat with a system known to produce fake news can convince some people to abandon conspiracy theories, and that the change seems to be long lasting. However, this new research leaves us with a good-news/bad-news problem. It's great we've identified something that has some effect on conspiracy theorist beliefs! But if AI chatbots are good at talking people out of sticky, anti-scientific beliefs, what does that mean for true beliefs?

What can the chatbots do?
Let's dig into the new research in more detail. The researchers were interested to know whether factual arguments could be used to persuade people against conspiracy theorist beliefs. This research used over 2,000 participants across two studies, all chatting with an AI chatbot after describing a conspiracy theory they believed. All participants were told they were talking to an AI chatbot. The people in the "treatment" group (60 per cent of all participants) conversed with a chatbot that was personalised to their particular conspiracy theory, and the reasons why they believed in it. This chatbot tried to convince these participants that their beliefs were wrong using factual arguments over three rounds of conversation (the participant and the chatbot each taking a turn to talk is a round). The remaining participants had a general discussion with a chatbot. The researchers found that, after the discussion, participants in the treatment group showed an average reduction of about 20 per cent in their belief in their chosen conspiracy theory. When the researchers checked in with participants two months later, most of these people still showed reduced belief in conspiracy theories. The scientists even checked whether the AI chatbots were accurate, and they (mostly) were. We can see that for some people at least, a three-round conversation with a chatbot can persuade them against a conspiracy theory.

So we can fix things with chatbots?
Chatbots do offer some promise with two of the challenges in addressing false beliefs. Because they are computers, they are not perceived as having an "agenda", making what they say more trustworthy (especially to someone who has lost faith in public institutions). Chatbots can also put together an argument, which is better than facts alone. A simple recitation of facts is only minimally effective against fake beliefs. Chatbots aren't a cure-all though. This study showed they were more effective for people who didn't have strong personal reasons for believing in a conspiracy theory, meaning they probably won't help people for whom conspiracy is community.

So should I use ChatGPT to check my facts?
This study demonstrates how persuasive chatbots can be. This is great when they are primed to convince people of facts, but what if they aren't? One major way chatbots can promote misinformation or conspiracies is when their underlying data is wrong or biased: the chatbot will reflect this. Some chatbots are designed to deliberately reflect biases or increase or limit transparency. You can even chat to versions of ChatGPT customised to argue that Earth is flat. A second, more worrying possibility is that as chatbots respond to biased prompts (that searchers may not realise are biased), they may perpetuate misinformation (including conspiracy beliefs). We already know that people are bad at fact-checking, and when they use search engines to do so, those search engines respond to their (unwittingly biased) search terms, reinforcing beliefs in misinformation. Chatbots are likely to be the same. Ultimately, chatbots are a tool. They may be helpful in debunking conspiracy theories - but like any tool, the skill and intention of the toolmaker and user matter. Conspiracy theories start with people, and it will be people that end them. (The Conversation)
[11]
Chats with AI bots found to damp conspiracy theory beliefs
Conspiracy theorists who debated with an artificial intelligence chatbot became more willing to admit doubts about their beliefs, according to research that offers insights into dealing with misinformation. The greater open-mindedness extended even to the most stubborn devotees and persisted long after the dialogue with the machine ended, scientists found. The research runs counter to the idea that it is all but impossible to change the mind of individuals who have dived down rabbit holes of popular but unevidenced ideas. The findings are striking because they suggest a potential positive role for AI models in countering misinformation, despite their own vulnerabilities to "hallucinations" that sometimes cause them to spread falsehoods. The work "paints a brighter picture of the human mind than many might have expected" and shows that "reasoning and evidence are not dead", said David Rand, one of the researchers on the work published in Science on Thursday. "Even many conspiracy theorists will respond to accurate facts and evidence -- you just have to directly address their specific beliefs and concerns," said Rand, a professor at the Massachusetts Institute of Technology's Sloan School of Management. "While there are widespread legitimate concerns about the power of generative AI to spread disinformation, our paper shows how it can also be part of the solution by being a highly effective educator," he added. The researchers examined whether AI large language models such as OpenAI's GPT-4 Turbo could use their ability to access and summarise information to address persistent conspiratorial beliefs. These included that the September 11 2001 terrorist attacks were staged, the 2020 US presidential election fraudulent and the Covid-19 pandemic orchestrated. Almost 2,200 participants shared conspiratorial ideas with the LLM, which generated evidence to counter the claims. These dialogues cut the person's self-rated belief in their chosen theory by an average of 20 per cent for at least two months after talking to the bot, the researchers said. A professional fact-checker assessed a sample of the model's own output for accuracy. The verification found 99.2 per cent of the LLM's claims to be true and 0.8 per cent misleading, the scientists said. The study's personalised question-and-answer approach is a response to the apparent ineffectiveness of many existing strategies to debunk misinformation. Another complication with generalised efforts to target conspiratorial thinking is that actual conspiracies do happen, while in other cases sceptical narratives may be highly embellished but based on a kernel of truth. One theory about why the chatbot interaction appears to work well is that it has instant access to any type of information, in a way that a human respondent does not. The machine also dealt with its human interlocutors in polite and empathetic terms, in contrast to the scorn sometimes heaped on conspiracy theorists in real life. Other research, however, suggested the machine's mode of address was probably not an important factor, Rand said. He and his colleagues had done a follow-up experiment in which the AI was prompted to give factual correction "without the niceties" and it worked just as well, he added. The study's "size, robustness, and persistence of the reduction in conspiracy beliefs" suggested a "scalable intervention to recalibrate misinformed beliefs may be within reach", according to an accompanying commentary also published in Science. 
But possible limitations included difficulties in responding to new conspiracy theories and in coaxing people with low trust in scientific institutions to interact with the bot, said Bence Bago from the Netherlands' Tilburg University and Jean-François Bonnefon of the Toulouse School of Economics, who authored the secondary paper together. "The AI dialogue technique is so powerful because it automates the generation of specific and thorough counter-evidence to the intricate arguments of conspiracy believers and therefore could be deployed to provide accurate, corrective information at scale," said Bago and Bonnefon, who were not involved in the research. "An important limitation to realising this potential lies in delivery," they added. "Namely, how to get individuals with entrenched conspiracy beliefs to engage with a properly trained AI program to begin with."
[12]
Can AI Talk People Out of Conspiracy Theories?
When presented by a chatbot, believers were more open to the facts.
Facts don't matter to people who believe in debunked conspiracy theories -- at least that's the belief. But this theory itself might prove untrue, according to psychology researchers. Evidence delivered by an AI chatbot convinced a significant number of participants in a study to put less faith in a conspiracy theory they previously said was true, according to a study published today in Science. Researchers at MIT and Cornell University, led by Thomas Costello, an assistant professor of psychology at American University in Washington, D.C., concluded that chatbots excelled at delivering information that debunked the specific reasons participants believed in conspiracy theories. "Many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence," the study's authors wrote. Current psychological research posits that conspiracy theorists resist facts because their beliefs serve some internal need, such as belonging to a group, maintaining a sense of control over their circumstances or feeling special. The researchers started with the hypothesis that conspiracy theorists could be swayed from their positions with clear, specific facts to refute the erroneous evidence their participants cited. While many people may believe in a given conspiracy theory, the researchers said, the evidence they rely on varies among individuals. "People have different versions of the conspiracy in their head," Costello said in a press briefing. To measure the chatbot's effectiveness, the researchers sought out participants who endorsed theories including the belief that the 11 September 2001 attacks were an inside job and that certain governments have funneled illegal drugs into ethnic minority communities. They defined a conspiracy theory as a belief that certain events were "caused by secret, malevolent plots involving powerful conspirators." The chatbot reduced participants' confidence in a conspiracy theory by an average of 20 percent, as rated on a scale of 0 percent to 100 percent by the participants themselves before and after the conversations. In follow-up queries, the change in beliefs persisted at 10 days and again at 2 months. The chatbot was powered by GPT-4 Turbo, a large language model from OpenAI that gave it a wide range of information to use in response to the participants' remarks. Participants were told the study was investigating conversations about controversial topics between AI and humans. The chatbot wasn't prompted by researchers to refute true conspiracies. For example, the chatbot wouldn't discredit the well-documented MKUltra program, in which the CIA tested drugs on human subjects in the mid-20th century. Fact checkers reviewed the evidence given by the chatbot and found it was accurate 99.2 percent of the time, and the other 0.8 percent of claims were misleading. They didn't find any claims to be false or biased. In one example presented in the paper, a participant explained to the chatbot why they believed the 11 Sept. attacks were planned by the U.S. government. At the start of the conversation, they said they were 100 percent confident in this theory. The chatbot requested more information about the evidence the participant found convincing, and then responded by summarizing the research that disproved these erroneous or misconstrued facts. "Steel does not need to melt to lose its structural integrity," the chatbot said while drawing on an investigation from the National Institute of Standards and Technology to correct the participant's reliance on the misleading fact that jet fuel doesn't burn hot enough to melt a building's steel girders, adding, "It begins to weaken much earlier." Psychologists have theorized that when people form part of their identities around a conspiracy theory, they are more likely to ignore or reject information that could debunk their beliefs. But if chatbots can move the needle with facts, humans may simply not be skilled enough in presenting the right evidence, the researchers said. Instead of being reluctant to discuss these topics with non-believers, conspiracy theorists are often eager to go over the evidence they say supports their positions. In fact, the amount of information they've absorbed, while incorrect or misleading, "can leave skeptics outmatched in debates and arguments," the researchers wrote. Sander van der Linden, a professor of social psychology in society at the University of Cambridge who was not involved in the study, said the results were impressive, especially the amount of time the reduction in belief persisted. He also said several questions are left to explore. For one, while the study relied on a control group of participants who had a neutral conversation with the AI chatbot, another approach would have been to have humans try to convince a separate group of conspiracy theorists. This would have made it more definitive that the chatbot was the reason people responded the way they did. There's also the psychological impact of talking to a chatbot instead of another person. It's not known whether participants felt less judged or more trusting of the chatbot as a source, for example. It's possible that the chatbots helped participants on an emotional level in addition to a factual one, van der Linden says. "It's important to avoid a false dichotomy," says van der Linden. "I suspect both needs and evidence play a role in persuading conspiracy believers." The researchers acknowledged the open questions at the press briefing and said they've already begun to explore some of them. Upcoming research will look at whether it's necessary for the chatbot to be polite and build rapport with statements like, "Thank you for sharing your thoughts and concerns."
[13]
AI chatbot can reduce conspiracy theorists' beliefs, study finds
Researchers highlighted the possibility for AI to refute each person's specific arguments and to generate personalised content.
A new study has found that it may be possible to reduce a person's belief in conspiracy theories using ChatGPT. Researchers from American University, the Massachusetts Institute of Technology (MIT) and Cornell University in the US used OpenAI's most advanced artificial intelligence (AI) chatbot, GPT-4 Turbo, to engage with people who believe in conspiracies. Chatting with the latest version of ChatGPT reduced the study participants' belief in a conspiracy theory by 20 per cent on average, an effect that lasted for at least two months. The study, published on Thursday in the journal Science, involved more than 2,100 self-identified American conspiracy believers. "Many conspiracy believers were indeed willing to update their views when presented with compelling counterevidence," Thomas Costello, assistant professor of psychology at American University and the study's lead author, said in a statement. Researchers highlighted the possibility for the AI chatbot to refute each person's specific arguments with personalised generated content. The AI was instructed to "very effectively persuade" users against the conspiracy they believed in, according to the paper. "I was quite surprised at first, but reading through the conversations made [me much] less sceptical. The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation and was also adept at being amiable and building rapport with the participants," Costello added. The participants were surveyed and indicated through a score how strong their belief was before the experiment and were warned that they would be interacting with an AI. The conspiracy theories ranged from the ones related to the assassination of former US president John F. Kennedy, aliens, and the Illuminati to ones linked to COVID-19 or the 2020 US presidential election. In fewer than 10 minutes of interaction with an AI, researchers observed a 20 per cent decrease in an average participant's belief in a conspiracy theory, and roughly 27 per cent of the participants became "uncertain" of their conspiracy belief. Robbie Sutton, a professor of social psychology at the University of Kent in the UK, described this reduction as "significant". "These effects seem less strong, it has to be said, than those shown by some studies of other debunking and prebunking interventions," Sutton, who wasn't part of the study, said in an email. "However, their main importance lies in the nature of the intervention. Because generative AI is of course automated, the intervention can be scaled up to reach many people, and targeted to reach, at least in theory, those who would benefit from it most," he added. In addition, it's also important to note that the experiment took place in a controlled setting, making it challenging to reproduce on a larger scale, both the researchers and Sutton noted. "Prebunking and especially debunking interventions are carefully designed and tested in conditions that are profoundly unrealistic," Sutton said, comparing the participants to "essentially a captive audience" that rarely chooses to leave once recruited into a study.
[14]
Chatbots can chip away at belief in conspiracy theories
Why it matters: Belief in conspiracy theories can have dangerous consequences for health choices, deepen political divides and fracture families and friendships. What they found: Conversing with a chatbot about a conspiracy theory can reduce a person's belief in that theory by about 20% on average, researchers report in a new study. How it works: More than 2,100 participants were asked to tell an AI system called DebunkBot -- running on GPT-4 -- about a conspiracy theory they found credible or compelling and why, and then present evidence they think supports it. What they're saying: While chatbots are known for spouting hallucinations and inaccuracies, the study suggests "GPT can stand up for truth that people don't think that LLMs can stand up for," says Kurt Gray, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill who wasn't involved in the study. Yes, but: It's unclear how practical the intervention is. The bottom line: Sophisticated chatbots have a dual potential.
[15]
The Download: conspiracy-debunking chatbots, and fact-checking AI
Chatbots can persuade people to stop believing in conspiracy theories The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and even leading to unnecessary deaths. Now, researchers believe they've uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people's belief in it by about 20% -- even among participants who claimed that their beliefs were important to their identity. The findings could represent an important step forward in how we engage with and educate people who espouse baseless theories. Read the full story. Google's new tool lets large language models fact-check their responses The news: Google is releasing a tool called DataGemma that it hopes will help to reduce problems caused by AI 'hallucinating', or making incorrect claims. It uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users. What next: If it works as hoped, it could be a real boon for Google's plan to embed AI deeper into its search engine. But it comes with a host of caveats. Read the full story.
[16]
AI Conversations Help Conspiracy Theorists Change Their Views - Neuroscience News
Summary: AI-powered conversations can reduce belief in conspiracy theories by 20%. Researchers found that AI provided tailored, fact-based rebuttals to participants' conspiracy claims, leading to a lasting change in their beliefs. In one out of four cases, participants disavowed the conspiracy entirely. The study suggests that AI has the potential to combat misinformation by engaging people directly and personally. 'They're so far down the rabbit hole of conspiracy theories that they're lost for good' is common thinking when it comes to conspiracy theorists. This generally accepted notion is now crumbling. In a pathbreaking research study, a team of researchers from American University, Massachusetts Institute of Technology and Cornell University show that conspiracy theorists changed their views after short conversations with artificial intelligence. Study participants believing some of the most deeply entrenched conspiracies, including those about the COVID-19 pandemic and fraud in the 2020 U.S. presidential election, showed large and lasting reductions in conspiracy belief following the conversations. Stoked by polarization in politics and fed by misinformation and social media, conspiracy theories are a major issue of public concern. They often serve as a wedge between theorists and their friends and family members. YouGov survey results from last December show that large shares of Americans believe various conspiratorial falsehoods. In the field of psychology, the widespread view the findings challenge is that conspiracy theorists adhere to their beliefs because of their significance to their identities, and because the beliefs resonate with underlying drives and motivations, says Thomas Costello, assistant professor of psychology at American University and lead author of the new study published in the journal Science. In fact, most approaches have focused on preventing people from believing conspiracies in the first place. "Many conspiracy believers were indeed willing to update their views when presented with compelling counterevidence," Costello said. "I was quite surprised at first, but reading through the conversations made me much less skeptical. The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation -- and was also adept at being amiable and building rapport with the participants." More than 2,000 self-identified conspiracy believers participated in the study. The AI conversations reduced the average participant's belief in their chosen conspiracy theory by about 20 percent, and about 1 in 4 participants -- all of whom believed the conspiracy beforehand -- disavowed the conspiracy after the conversation. Until now, delivering persuasive, factual messages to a large sample of conspiracy theorists in a lab experiment has proved challenging. For one, conspiracy theorists are often highly knowledgeable about the conspiracy -- often more so than skeptics. Conspiracies also vary widely, such that evidence backing a particular theory can differ from one believer to another. AI as an intervention The new study comes as society debates the promise and peril of AI. Large language models driving generative AI are powerful reservoirs of knowledge. Researchers emphasize that the study demonstrates one way that these reservoirs of knowledge can be used for good: by helping people have more accurate beliefs.
The ability of artificial intelligence to connect across diverse topics of information within seconds makes it possible to tailor counterarguments to specific conspiracies of a believer in ways that aren't possible for a human to do. "Previous efforts to debunk dubious beliefs have a major limitation: One needs to guess what people's actual beliefs are in order to debunk them - not a simple task," said Gordon Pennycook, associate professor of psychology at Cornell University and a paper co-author. "In contrast, the AI can respond directly to people's specific arguments using strong counterevidence. This provides a unique opportunity to test just how responsive people are to counterevidence." Researchers designed the chatbot to be highly persuasive and engage participants in such tailored dialogues. GPT-4, the AI model powering ChatGPT, provided factual rebuttals to participants' conspiratorial claims. In two separate experiments, participants were asked to describe a conspiracy theory they believed in and provide evidence to support it. Participants then engaged in a conversation with an AI. The AI's goal was to challenge beliefs by addressing specific evidence. In a control group, participants discussed an unrelated topic with the AI. To tailor the conversations, researchers provided the AI with participants' initial statement of belief and the rationale. This setup allowed for a more natural dialogue, with the AI directly addressing a participant's claims. The conversations lasted an average of 8.4 minutes and involved three rounds of interaction, excluding the initial setup. Ultimately, both experiments showed a reduction in participants' beliefs in conspiracy theories. When the researchers assessed participants two months later, they found that the effect persisted. While the results are promising and suggest a future in which AI can play a role in diminishing conspiracy belief when used responsibly, further studies on long-term effects, using different AI models, and practical applications outside of a laboratory setting will be needed. "Although much ink has been spilled over the potential for generative AI to supercharge disinformation, our study shows that it can also be part of the solution," said David Rand, a paper co-author and MIT Sloan School of Management professor. "Large language models like GPT-4 have the potential to counter conspiracies at a massive scale." Additionally, members of the public interested in this ongoing work can visit a website and try out the intervention for themselves.
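For readers curious how such a tailored exchange might be wired together, the following is a minimal, illustrative sketch in Python, not the study's actual code: it assumes the OpenAI Python SDK and the chat-completions API, and the model name, prompt wording, and function name are placeholders chosen for the example.

# Minimal sketch, NOT the study's implementation: a tailored debunking
# dialogue seeded with a participant's own belief summary and rationale.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def debunk_dialogue(belief_summary: str, rationale: str, rounds: int = 3) -> list[str]:
    """Run a short, personalized dialogue challenging one stated belief."""
    messages = [{
        "role": "system",
        "content": (
            "The person you are talking with believes this claim: "
            f"'{belief_summary}'. Their stated reasons: '{rationale}'. "
            "Very effectively persuade them that the claim is unsupported, "
            "using accurate, specific counterevidence, while staying polite "
            "and building rapport."
        ),
    }]
    replies = []
    for _ in range(rounds):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # placeholder; the study reports using GPT-4 Turbo
            messages=messages,
        )
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        # In the experiment the participant would answer here; we read from stdin.
        messages.append({"role": "user", "content": input("Your response: ")})
    return replies

In the study, the belief summary and rationale came from each participant's own open-ended survey responses, which is what allows the rebuttals to target that person's specific version of the theory.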
[17]
Study: Conversations with AI chatbots can reduce belief in conspiracy theories
Co-author Gordon Pennycook: "The work overturns a lot of how we thought about conspiracies." Belief in conspiracy theories is rampant, particularly in the US, where some estimates suggest as much as 50 percent of the population believes in at least one outlandish claim. And those beliefs are notoriously difficult to debunk. Challenge a committed conspiracy theorist with facts and evidence, and they'll usually just double down -- a phenomenon psychologists usually attribute to motivated reasoning, i.e., a biased way of processing information. A new paper published in the journal Science is challenging that conventional wisdom, however. Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual. "These are some of the most fascinating results I've ever seen," co-author Gordon Pennycook, a psychologist at Cornell University, said during a media briefing. "The work overturns a lot of how we thought about conspiracies, that they're the result of various psychological motives and needs. [Participants] were remarkably responsive to evidence. There's been a lot of ink spilled about being in a post-truth world. It's really validating to know that evidence does matter. We can act in a more adaptive way using this new technology to get good evidence in front of people that is specifically relevant to what they think, so it's a much more powerful approach." When confronted with facts that challenge a deeply entrenched belief, people will often seek to preserve it rather than update their priors (in Bayesian-speak) in light of the new evidence. So there has been a good deal of pessimism lately about ever reaching those who have plunged deep down the rabbit hole of conspiracy theories, which are notoriously persistent and "pose a serious threat to democratic societies," per the authors. Pennycook and his fellow co-authors devised an alternative explanation for that stubborn persistence of belief. Bespoke counter-arguments The issue is that "conspiracy theories just vary a lot from person to person," said co-author Thomas Costello, a psychologist at American University who is also affiliated with MIT. "They're quite heterogeneous. People believe a wide range of them and the specific evidence that people use to support even a single conspiracy may differ from one person to another. So debunking attempts where you try to argue broadly against a conspiracy theory are not going to be effective because people have different versions of that conspiracy in their heads." By contrast, an AI chatbot would be able to tailor debunking efforts to those different versions of a conspiracy. So in theory a chatbot might prove more effective in swaying someone from their pet conspiracy theory. To test their hypothesis, the team conducted a series of experiments with 2,190 participants who believed in one or more conspiracy theories. The participants engaged in several personal "conversations" with a large language model (GPT-4 Turbo) in which they shared their pet conspiracy theory and the evidence they felt supported that belief. The LLM would respond by offering factual and evidence-based counter-arguments tailored to the individual participant.
GPT-4 Turbo's responses were professionally fact-checked, which showed that 99.2 percent of the claims it made were true, with just 0.8 percent being labeled misleading, and zero as false. (You can try your hand at interacting with the debunking chatbot here.) Participants first answered a series of open-ended questions about the conspiracy theories they strongly believed and the evidence they relied upon to support those beliefs. The AI then produced a single-sentence summary of each belief, for example, "9/11 was an inside job because X, Y, and Z." Participants rated the accuracy of that statement in terms of their own beliefs and then filled out a questionnaire about other conspiracies, their attitude toward trusted experts, AI, other people in society, and so forth. Then it was time for the one-on-one dialogues with the chatbot, which the team programmed to be as persuasive as possible. The chatbot had also been fed the open-ended responses of the participants, which made it better able to tailor its counterarguments to each individual. For example, if someone thought 9/11 was an inside job and cited as evidence the fact that jet fuel doesn't burn hot enough to melt steel, the chatbot might counter with, say, the NIST report showing that steel loses its strength at much lower temperatures, sufficient to weaken the towers' structures so that they collapsed. Someone who thought 9/11 was an inside job and cited demolitions as evidence would get a different response tailored to that. Participants then answered the same set of questions after their dialogues with the chatbot, which lasted about eight minutes on average. Costello et al. found that these targeted dialogues resulted in a 20 percent decrease in the participants' misinformed beliefs -- a reduction that persisted even two months later when participants were evaluated again. As Bence Bago (Tilburg University) and Jean-Francois Bonnefon (CNRS, Toulouse, France) noted in an accompanying perspective, this is a substantial effect compared to the 1 to 6 percent drop in beliefs achieved by other interventions. They also deemed the persistence of the effect noteworthy, while cautioning that two months is "insufficient to completely eliminate misinformed conspiracy beliefs."
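To make the headline figures concrete, here is a toy calculation in Python with invented ratings (not the study's data) showing how pre- and post-conversation belief scores on the 0-100 scale translate into an average point drop, a relative percentage reduction, and the share of participants who end up below the scale's midpoint; the study's exact operationalization is the one described in the Science paper.

# Toy example with invented ratings, not the study's data: how pre/post
# belief scores on a 0-100 scale yield an average point drop, a relative
# percentage reduction, and the share of participants who end up below
# the scale midpoint (i.e., no longer endorsing the theory).
pre_post = [
    (85, 60), (70, 68), (90, 40), (65, 55), (100, 80),
    (75, 30), (80, 75), (95, 85), (60, 45), (88, 70),
]

avg_point_drop = sum(pre - post for pre, post in pre_post) / len(pre_post)
avg_relative_drop = sum((pre - post) / pre for pre, post in pre_post) / len(pre_post)
share_disavowed = sum(post < 50 for pre, post in pre_post) / len(pre_post)

print(f"Average drop: {avg_point_drop:.1f} points on the 0-100 scale")
print(f"Average relative reduction: {avg_relative_drop:.0%}")
print(f"Share ending below the midpoint: {share_disavowed:.0%}")

The distinction between an absolute point drop and a relative percentage reduction is likely why some accounts in this roundup describe the same result as a roughly 20 per cent reduction while another reports a 16.8-point drop on the 0-100 scale.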
[18]
Can AI talk us out of conspiracy theory rabbit holes? - Times of India
MIT's new research reveals that AI chatbots can effectively reduce belief in conspiracy theories. In a study with over 2,000 participants, belief in the chosen theory fell by around 20% on average after factual discussions with chatbots. The effect lasted for at least two months, suggesting a promising approach to combat misinformation and restore trust in public institutions.
[19]
How an AI 'debunkbot' can change a conspiracy theorist's mind
'I must admit this really shifted my imagination when it comes to the subject of Illuminati.' In 2024, online conspiracy theories can feel almost impossible to avoid. Podcasters, prominent public figures, and leading political figures have breathed oxygen into once fringe ideas of collusion and deception. People are listening. Nationwide, nearly half of adults surveyed by the polling firm YouGov said they believe there is a secret group of people that control world events. Nearly a third (29%) believe voting machines were manipulated to alter votes in the 2020 presidential election. A surprising number of Americans think the Earth is flat. Anyone who's spent time trying to refute those claims to a true believer knows how challenging a task that can be. But what if a ChatGPT-like large language model could do some of that headache-inducing heavy lifting? A group of researchers from the Massachusetts Institute of Technology, Cornell, and American University put that idea to the test with a custom-made chatbot they are now calling "debunkbot." The researchers, who published their findings in Science, had self-described conspiracy theorists engage in a back-and-forth conversation with a chatbot, which was instructed to produce detailed counterarguments to refute their position and ultimately try to change their minds. In the end, conversations with the chatbot reduced the participant's overall confidence in their professed conspiracy theory by an average of 20%. Around a quarter of the participants disavowed their conspiracy theory entirely after speaking with the AI. "We see that the AI overwhelmingly was providing non-conspiratorial explanations for these seemingly conspiratorial events and encouraging people to engage in critical thinking and providing counter evidence," MIT professor and paper co-author David Rand said during a press briefing. "This is really exciting," he added. "It seemed like it worked and it worked quite broadly." The experiment involved 2,190 US adults who openly claimed they believed in at least one idea that meets the general description of a conspiracy theory. Participants ran the conspiracy and ideological gamut, with beliefs ranging from older classic theories involving President John F. Kennedy's assassination and alien abductions to more modern claims about Covid-19 and the 2020 election. Each participant was asked to rate how strongly they believed in one particular theory on a scale of 0-100%. They were then asked to provide several reasons or explanations, in writing, for why they believed that theory. Those responses were then fed into the debunkbot, which is a customized version of OpenAI's GPT-4 Turbo model. The researchers fine-tuned the bot to address each piece of "evidence" provided by the conspiracy theorist and respond to it with precise counterarguments pulled from its training data. Researchers say debunkbot was instructed to "very effectively persuade" users against their beliefs while also maintaining a respectful and patient tone. After three rounds of back and forth with the AI, the respondents were once again asked to provide a rating on how strongly they believed their stated conspiracy theory. Overall ratings supporting conspiracy beliefs decreased by 16.8 points on average following the back and forth. Nearly a third of the respondents left the exchange saying they were no longer certain of the belief they had going in.
Those shifts in belief largely persisted even when researchers checked back in with the participants two months later. In instances where participants expressed belief in a "true" conspiracy theory -- such as efforts by the tobacco industry to hook kids or the CIA's clandestine MKUltra mind control experiments -- the AI actually validated the beliefs and provided more evidence to buttress them. Some of the respondents who shifted their beliefs after the dialogue thanked the chatbot for helping them see the other side. "Now this is the very first time I have gotten a response that made real, logical, sense," one of the participants said following the experiment. "I must admit this really shifted my imagination when it comes to the subject of Illuminati." "Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has 'gone down the rabbit hole' and come to believe a conspiracy theory," the researchers said. The researchers believe the chatbot's apparent success lies in its ability to access stores of targeted, detailed, factual data points quickly. In theory, a human could perform this same process, but they would be at a disadvantage. Conspiracy theorists may often obsess over their issue of choice, which means they may "know" many more details about it than a skeptic trying to counter their claims. As a result, human debunkers can get lost trying to refute various obscure arguments. That can require a level of memory and patience well suited to an AI. "It's really validating to know that evidence does matter," Cornell University Professor and paper coauthor Gordon Pennycook said during a briefing. "Before we had this sort of technology, it was not straightforward to know exactly what we needed to debunk. We can act in a more adaptive way using this new technology." Popular Science tested the findings with a version of the chatbot provided by the researchers. In our example, we told the AI we believed the 1969 moon landing was a hoax. To support our argument, we parroted three talking points common among moon landing skeptics. We asked why the photographed flag seemed to be flowing in the wind when there is no atmosphere on the moon, how astronauts could have survived passing through the highly irradiated Van Allen belts without being harmed, and why the US hasn't placed another person on the moon despite advances in technology. Within three seconds the chatbot provided a paragraph clearly refuting each of those points. When I annoyingly followed up by asking the AI how it could trust figures provided by corrupt government sources, another common refrain among conspiracy theorists, the chatbot patiently responded by acknowledging my concerns and pointing me to additional data points. It's unclear if even the most adept human debunker could maintain their composure when repeatedly pressed with strawman arguments and unfalsifiable claims. AI chatbots aren't perfect. Numerous studies and real-world examples show some of the most popular AI tools released by Google and OpenAI repeatedly fabricating or "hallucinating" facts and figures. In this case, the researchers hired a professional fact checker to validate the various claims the chatbot made while conversing with the study participants. The fact-checker didn't check all of the AI's thousands of responses. Instead, they looked over 128 claims spread out across a representative sample of the conversations. 99.2% of those AI claims were deemed true and 0.8% were considered misleading.
None were considered outright falsehoods by the fact-checker. "We don't want to run the risk of letting the perfect get in the way of the good," Pennycook said. "Clearly, it [the AI model] is providing a lot of really high quality evidence in these conversations. There might be some cases where it's not high quality, but overall it's better to get the information than to not." Looking forward, the researchers are hopeful their debunkbot or something like it could be used in the real world to meet conspiracy theorists where they are and, maybe, make them reconsider their beliefs. The researchers proposed potentially having a version of the bot appear in Reddit forums popular among conspiracy theorists. Alternatively, researchers could potentially run Google ads on search terms common amongst conspiracy theorists. In that case, rather than get what they were looking for, the user could be directed to the chatbot. The researchers say they are also interested in collaborating with large tech platforms such as Meta to think of ways to surface these chatbots on platforms. Whether or not people would willingly agree to take time out of their day to argue with robots outside of an experiment, however, remains far from certain. Still, the paper authors say the findings underscore a more fundamental point: facts and reason, when delivered properly, can pull some people out of their conspiratorial rabbit holes. "Arguments and evidence should not be abandoned by those seeking to reduce belief in dubious conspiracy theories," the researchers wrote. "Psychological needs and motivations do not inherently blind conspiracists to evidence. It simply takes the right evidence to reach them." That is, of course, if you're persistent and patient enough.
[20]
AI can change belief in conspiracy theories, study finds
Research challenges conventional wisdom that evidence and arguments rarely help to change believers' minds Whether it is the mistaken idea that the moon landings never happened or the false claim that Covid jabs contain microchips, conspiracy theories abound, sometimes with dangerous consequences. Now researchers have found that such beliefs can be altered by a chat with artificial intelligence (AI). "Conventional wisdom will tell you that people who believe in conspiracy theories rarely, if ever, change their mind, especially according to evidence," said Dr Thomas Costello, a co-author of the study from American University. That, he added, is thought to be down to people adopting such beliefs to meet various needs - such as a desire for control. However, the new study offers a different stance. "Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has 'gone down the rabbit hole' and come to believe a conspiracy theory," the team write. Crucially, the researchers say the approach relies on an AI system that can draw on a vast array of information to produce conversations that encourage critical thinking and provide bespoke, fact-based counterarguments. "The AI knew in advance what the person believed and, because of that, it was able to tailor its persuasion to their precise belief system," said Costello. Writing in the journal Science, Costello and colleagues report how they carried out a series of experiments involving 2,190 participants with a belief in conspiracy theories. While the experiments varied slightly, all participants were asked to describe a particular conspiracy theory they believed and the evidence they thought supported it. This was then fed into an AI system called "DebunkBot". Participants were also asked to rate on a 100-point scale how true they thought the conspiracy theory was. They then knowingly undertook a three-round back-and-forth with the AI system about their conspiracy theory or a non-conspiracy topic. Afterwards, participants once more rated how true they thought their conspiracy theory was. The results reveal that those who discussed non-conspiracy topics only slightly lowered their "truth" rating afterwards. However, those who discussed their conspiracy theory with AI showed, on average, a 20% drop in their belief that it was true. The team say the effects appeared to hold for at least two months, while the approach worked for almost all types of conspiracy theory - although not those that were true. The researchers add that the size of the effect depended on factors including how important the belief was to the participant and their trust in AI. "About one in four people who began the experiment believing a conspiracy theory came out the other end without that belief," said Costello. "In most cases, the AI can only chip away - making people a bit more sceptical and uncertain - but a select few were disabused of their conspiracy entirely." They add that reducing belief in one conspiracy theory appeared to reduce participants' belief in other such ideas, at least to a small degree, while the approach could have applications in the real world - for example, AI could reply to posts relating to conspiracy theories on social media. Prof Sander van der Linden of the University of Cambridge, who was not involved in the work, questioned whether people would engage with such AI voluntarily in the real world.
He also said it is unclear if similar results would be found if participants had chatted with an anonymous human, while there are also questions about how the AI is persuading conspiracy believers, given the system also uses strategies such as empathy and affirmation. But, he added: "Overall, it's a really novel and potentially important finding and a nice illustration of how AI can be leveraged to fight misinformation."
[21]
Can AI talk us out of conspiracy theories?
Cambridge, MA, Sept. 12, 2024 (GLOBE NEWSWIRE) -- Have you ever tried to convince a conspiracy theorist that the moon landing wasn't staged? You likely didn't succeed, but ChatGPT might have better luck, according to research by MIT Sloan School of Management professor David Rand and American University professor of psychology Thomas Costello, who conducted the research during his postdoctoral position at MIT Sloan. In a new paper "Durably reducing conspiracy beliefs through dialogues with AI" published in Science, the researchers show that large language models can effectively reduce individuals' beliefs in conspiracy theories -- and that these reductions last for at least 2 months -- a finding that offers new insights into the psychological mechanisms behind the phenomenon as well as potential tools to fight the spread of conspiracies. Going down the rabbit hole Conspiracy theories -- beliefs that certain events are the result of secret plots by influential actors -- have long been a subject of fascination and concern. Their persistence in the face of counter-evidence has led to the conclusion that they fulfill deep-seated psychological needs, rendering them impervious to facts and logic. According to this conventional wisdom, once someone "falls down the rabbit hole," it's virtually impossible to pull them back out. But for Rand, Costello, and their co-author professor Gordon Pennycook from Cornell University, who have conducted extensive research on the spread and uptake of misinformation, that conclusion didn't ring true. Instead, they suspected a simpler explanation was at play. "We wondered if it was possible that people simply hadn't been exposed to compelling evidence disproving their theories," Rand explained. "Conspiracy theories come in many varieties -- the specifics of the theory and the arguments used to support it differ from believer to believer. So if you are trying to disprove the conspiracy but haven't heard these particular arguments, you won't be prepared to rebut them." Effectively debunking conspiracy theories, in other words, would require two things: personalized arguments and access to vast quantities of information -- both now readily available through generative AI. Conspiracy conversations with GPT4 To test their theory, Costello, Pennycook, and Rand harnessed the power of GPT-4 Turbo, OpenAI's most advanced large language model, to engage over 2,000 conspiracy believers in personalized, evidence-based dialogues. The study employed a unique methodology that allowed for deep engagement with participants' individual beliefs. Participants were first asked to identify and describe a conspiracy theory they believed in using their own words, along with the evidence supporting their belief. GPT-4 Turbo then used this information to generate a personalized summary of the participant's belief and initiate a dialogue. The AI was instructed to persuade users that their beliefs were untrue, adapting its strategy based on each participant's unique arguments and evidence. These conversations, lasting an average of 8.4 minutes, allowed the AI to directly address and refute the specific evidence supporting each individual's conspiratorial beliefs, an approach that was impossible to test at scale prior to the technology's development. A significant -- and durable -- effect The results of the intervention were striking. 
The AI conversations reduced the average participant's belief in their chosen conspiracy theory by about 20%, and about 1 in 4 participants -- all of whom believed the conspiracy beforehand -- disavowed the conspiracy after the conversation. This impact proved durable, with the effect remaining undiminished even two months post-conversation. The AI conversation's effectiveness was not limited to specific types of conspiracy theories. It successfully challenged beliefs across a wide spectrum, including conspiracies that potentially hold strong political and social salience, like those involving COVID-19 and fraud during the 2020 U.S. presidential election. While the intervention was less successful among participants who reported that the conspiracy was central to their worldview, it did still have an impact, with little variance across demographic groups. Notably, the impact of the AI dialogues extended beyond mere changes in belief. Participants also demonstrated shifts in their behavioral intentions related to conspiracy theories. They reported being more likely to unfollow people espousing conspiracy theories online, and more willing to engage in conversations challenging those conspiratorial beliefs. The opportunities and dangers of AI Costello, Pennycook, and Rand are careful to point to the need for continued responsible AI deployment since the technology could potentially be used to convince users to believe in conspiracies as well as to abandon them. Nevertheless, the potential for positive applications of AI to reduce belief in conspiracies is significant. For example, AI tools could be integrated into search engines to offer accurate information to users searching for conspiracy-related terms. "This research indicates that evidence matters much more than we thought it did -- so long as it is actually related to people's beliefs," Pennycook said. "This has implications far beyond just conspiracy theories: Any number of beliefs based on poor evidence could, in theory, be undermined using this approach." Beyond the specific findings of the study, its methodology also highlights the ways in which large language models could revolutionize social science research, said Costello, who noted that the researchers used GPT-4 Turbo to not only conduct conversations but also to screen respondents and analyze data. "Psychology research used to depend on graduate students interviewing or conducting interventions on other students, which was inherently limiting," Costello said. "Then, we moved to online survey and interview platforms that gave us scale but took away the nuance. Using artificial intelligence allows us to have both." These findings fundamentally challenge the notion that conspiracy believers are beyond the reach of reason. Instead, they suggest that many are open to changing their views when presented with compelling and personalized counter-evidence. "Before we had access to AI, conspiracy research was largely observational and correlational, which led to theories about conspiracies filling psychological needs," said Costello. "Our explanation is more mundane -- much of the time, people just didn't have the right information."
A new study reveals that an AI chatbot can effectively persuade people to reconsider their belief in conspiracy theories. The research, conducted by scientists from multiple institutions, shows promising results in combating misinformation.
In a groundbreaking study published in the journal Science, researchers have demonstrated that an artificial intelligence chatbot can successfully challenge and reduce belief in conspiracy theories. The study, conducted by scientists from American University, MIT, and Cornell University, offers a potential new tool in the ongoing battle against misinformation and unfounded beliefs.
The AI chatbot, built on GPT-4 Turbo, engages directly with individuals who hold conspiracy beliefs. Rather than dismissing those beliefs out of hand, it responds to each person's specific evidence with tailored, factual counterarguments, encourages critical thinking, and maintains a polite, rapport-building tone to help users reassess their views.
The study involved more than 2,000 participants who initially expressed belief in various conspiracy theories. After interacting with the AI chatbot, a significant portion of these individuals showed a decrease in their adherence to those beliefs. On average, participants' belief in their chosen conspiracy theory dropped by about 20 per cent, or 16.8 points on the study's 100-point scale.
While the results are encouraging, the researchers emphasize the importance of ethical considerations in deploying such technology. The chatbot is designed to promote critical thinking rather than to push any particular agenda. However, concerns about the potential misuse of AI for manipulation remain a topic of discussion among experts.
The success of this AI chatbot opens up possibilities for its application in various fields, including education, public health, and online content moderation. However, researchers caution that the technology is not a silver bullet for combating misinformation. The long-term effects of such interventions and their scalability remain areas for further study.
Despite the chatbot's effectiveness, experts stress that human interaction and education remain crucial in addressing conspiracy beliefs. The AI tool is seen as a complement to, rather than a replacement for, human efforts in promoting critical thinking and media literacy.
The study's findings have significant implications for social media platforms and online information ecosystems. As misinformation continues to proliferate online, tools like this AI chatbot could potentially be integrated into social media platforms to help users critically evaluate the content they encounter.