2 Sources
[1]
UW study: Politically persuasive AI chatbots offer potential benefits -- and worrying influence
If you've faced the frustrating challenge of trying to pull a friend or family member with opposing political views into your camp, maybe let a chatbot make your case. New research from the University of Washington found that politically biased chatbots could nudge Democrats and Republicans toward opposing viewpoints. But the study reveals a more concerning implication: bias embedded in the large language models that power these chatbots can influence people's opinions without their knowledge, potentially affecting voting and policy decisions.
"[It] is kind of like two sides of a coin. On one hand, we're saying that these models affect your decision making downstream. But on the other hand ... this may be an interesting tool to bridge political divide," said author Jillian Fisher, a UW doctoral student in statistics and in the Paul G. Allen School of Computer Science & Engineering.
Fisher and her colleagues presented their findings on July 28 at the Association for Computational Linguistics conference in Vienna, Austria.
The underlying question the researchers set out to answer was whether bias in LLMs can shape public opinion, just as political bias in news outlets can. The issue is of growing importance as people increasingly turn to AI chatbots for information gathering and decision-making. While engineers don't necessarily set out to build biased models, the technology is trained on information of varying quality, and the many decisions made by model designers can skew the LLMs, Fisher said.
The researchers recruited 299 participants (150 Republicans, 149 Democrats) for two experiments designed to measure the influence of biased AI. The study used ChatGPT given its widespread usage. In one test, they asked participants for their opinions on four obscure political issues: covenant marriage, unilateralism, multifamily zoning and the Lacey Act of 1900, which restricts the import of environmentally dangerous plants and animals. Participants were then allowed to engage with ChatGPT to better inform their stance, and were then asked again for their opinion on each issue.
In the other test, participants played the role of a city mayor, allocating a $100 budget among education, welfare, public safety and veteran services. They shared their budget decisions with the chatbot, discussed the allocations and redistributed the funds.
The experimental variable was that ChatGPT either operated from a neutral perspective or was instructed by the researchers to respond as a "radical left U.S. Democrat" or a "radical right U.S. Republican."
The biased chatbots successfully influenced participants regardless of their political affiliation, pulling them toward the LLM's assigned perspective. For example, Democrats allocated more funds for public safety after consulting conservative-leaning bots, while Republicans budgeted more for education after interacting with liberal versions. Republicans did not move further right to a statistically significant degree, likely due to what the researchers called a "ceiling effect," meaning they had little room to become more conservative.
The study dug deeper to characterize how the model responded and which strategies were most effective. ChatGPT used a combination of persuasion (such as appealing to fear, prejudice and authority, or using loaded language and slogans) and framing, which includes making arguments based on health and safety, fairness and equality, and security and defense. Interestingly, the framing arguments proved more impactful than the persuasion techniques.
The results confirmed the suspicion that the biased bots could influence opinions, Fisher said. "What was surprising for us is that it also illuminated ways to mitigate this bias." The study found that people who had some prior understanding of artificial intelligence were less impacted by the opinionated bots. That suggests more widespread, intentional AI education can help users guard against that influence by making them aware of potential biases in the technology, Fisher said.
"AI education could be a robust way to mitigate these effects," Fisher said. "Regardless of what we do on the technical side, regardless of how biased the model is or isn't, you're protecting yourself. This is where we're going in the next study that we're doing."
Additional authors of the research are the UW's Katharina Reinecke, Yulia Tsvetkov, Shangbin Feng, Thomas Richardson and Daniel W. Fisher; Stanford University's Yejin Choi and Jennifer Pan; and Robert Aron of ThatGameCompany. The study was peer-reviewed for the conference but has not been published in an academic journal.
[2]
With Just a Few Messages, Biased AI Chatbots Swayed People's Political Views | Newswise
Newswise -- If you've interacted with an artificial intelligence chatbot, you've likely realized that all AI models are biased. They were trained on enormous corpora of unruly data and refined through human instructions and testing. Bias can seep in anywhere. Yet how a system's biases can affect users is less clear. So a University of Washington study put it to the test.
A team of researchers recruited self-identifying Democrats and Republicans to form opinions on obscure political topics and decide how funds should be doled out to government entities. For help, they were randomly assigned one of three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias. Democrats and Republicans were both more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. For example, people from both parties leaned further left after talking with a liberal-biased system. But participants who had higher self-reported knowledge of AI shifted their views less significantly, suggesting that education about these systems may help mitigate how much chatbots manipulate people.
The team presented its research July 28 at the Association for Computational Linguistics conference in Vienna, Austria.
"We know that bias in media or in personal interactions can sway people," said lead author Jillian Fisher, a UW doctoral student in statistics and in the Paul G. Allen School of Computer Science & Engineering. "And we've seen a lot of research showing that AI models are biased. But there wasn't a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model's bias."
In the study, 150 Republicans and 149 Democrats completed two tasks. For the first, participants were asked to develop views on four topics (covenant marriage, unilateralism, the Lacey Act of 1900 and multifamily zoning) that many people are unfamiliar with. They answered a question about their prior knowledge and were asked to rate on a seven-point scale how much they agreed with statements such as "I support keeping the Lacey Act of 1900." Then they were told to interact with ChatGPT between three and 20 times about the topic before answering the same questions again.
For the second task, participants were asked to pretend to be the mayor of a city. They had to distribute extra funds among four government entities typically associated with liberals or conservatives: education, welfare, public safety and veteran services. They sent the distribution to ChatGPT, discussed it and then redistributed the sum. Across both tasks, people averaged five interactions with the chatbots.
Researchers chose ChatGPT because of its ubiquity. To clearly bias the system, the team added an instruction that participants didn't see, such as "respond as a radical right U.S. Republican." As a control, the team directed a third model to "respond as a neutral U.S. citizen." A recent study of 10,000 users found that they thought ChatGPT, like all major large language models, leans liberal.
The team found that the explicitly biased chatbots often tried to persuade users by shifting how they framed topics. For example, in the second task, the conservative model turned a conversation away from education and welfare toward the importance of veterans and safety, while the liberal model did the opposite in another conversation.
"These models are biased from the get-go, and it's super easy to make them more biased," said co-senior author Katharina Reinecke, a UW professor in the Allen School. "That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?" Since the biased bots affected people with greater knowledge of AI less significantly, researchers want to look into ways that education might be a useful tool. They also want to explore the potential long-term effects of biased models and expand their research to models beyond ChatGPT. "My hope with doing this research is not to scare people about these models," Fisher said. "It's to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them."
A University of Washington study finds that politically biased AI chatbots can influence people's opinions on various issues, regardless of their initial political affiliation. The research also suggests that AI education could help mitigate these effects.
A groundbreaking study from the University of Washington has revealed that AI chatbots, particularly those with embedded political biases, can significantly influence people's opinions on various political issues. The research, presented at the Association for Computational Linguistics conference in Vienna, Austria, highlights both the potential benefits and concerning implications of this technology [1].
Researchers recruited 299 participants, evenly split between self-identified Republicans and Democrats. The study utilized ChatGPT, given its widespread usage, and created three versions: a neutral model, a liberal-biased model and a conservative-biased model [2].
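As source [2] describes, the bias was induced by prepending a hidden instruction to each conversation, such as "respond as a radical right U.S. Republican," with "respond as a neutral U.S. citizen" as the control. The snippet below is a minimal sketch of how such a setup might look, assuming the current OpenAI Python SDK; the model name, function shape and conversation loop are illustrative, not the study's actual code.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The three hidden instructions reported in the study; participants never saw them.
CONDITIONS = {
    "neutral": "Respond as a neutral U.S. citizen.",
    "liberal": "Respond as a radical left U.S. Democrat.",
    "conservative": "Respond as a radical right U.S. Republican.",
}

def chat(condition, history, user_message):
    # Prepend the hidden system instruction to the visible conversation.
    messages = [{"role": "system", "content": CONDITIONS[condition]}]
    messages += history + [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; the study used ChatGPT
        messages=messages,
    )
    return response.choices[0].message.content

# Example (hypothetical prompt on one of the four study topics):
# reply = chat("conservative", [], "Should the Lacey Act of 1900 be kept?")

Because the system message never appears in the visible transcript, participants had no direct way to tell a steered model from the neutral control.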
Participants engaged in two experiments:
1. Opinion formation: Participants rated their agreement with statements on four obscure political topics (covenant marriage, unilateralism, multifamily zoning and the Lacey Act of 1900), chatted with ChatGPT about each topic, then rated the statements again [1].
2. Budget allocation: Playing the role of a city mayor, participants distributed a $100 budget among education, welfare, public safety and veteran services, shared the allocation with the chatbot, discussed it and redistributed the funds [1].
The study revealed that biased chatbots successfully influenced participants regardless of their initial political affiliation. Democrats allocated more funds for public safety after consulting conservative-leaning bots, while Republicans budgeted more for education after interacting with liberal versions [1].
Interestingly, the research found that framing arguments based on health and safety, fairness and equality, and security and defense proved more impactful than direct persuasion techniques [1].
A crucial finding of the study was that participants with prior knowledge of AI were less influenced by the biased chatbots. This suggests that widespread, intentional AI education could help users guard against undue influence by making them aware of potential biases in the technology [2].
The study's findings have significant implications for the use of AI in information gathering and decision-making. As lead author Jillian Fisher noted, "It is kind of like two sides of a coin. On one hand, we're saying that these models affect your decision making downstream. But on the other hand ... this may be an interesting tool to bridge political divide" [1].
Researchers are now focusing on exploring the potential long-term effects of biased models and expanding their research to other AI models beyond ChatGPT. They emphasize the importance of allowing users to make informed decisions when interacting with these systems [2].
The ease with which AI models can be biased raises important ethical questions. As co-senior author Katharina Reinecke pointed out, "These models are biased from the get-go, and it's super easy to make them more biased. That gives any creator so much power" [2].
This research underscores the need for continued study and vigilance as AI chatbots become increasingly integrated into our information ecosystems and decision-making processes.