AI autocomplete covertly shifts human opinions on social issues, even when users ignore suggestions

Reviewed by Nidhi Govil


Cornell University researchers discovered that AI autocomplete tools subtly manipulate how people think about major societal issues. In studies with over 2,500 participants, biased AI suggestions shifted opinions on topics like the death penalty and fracking—even among users who rejected the AI's text. Warning participants about AI bias beforehand or debriefing them afterward failed to prevent the attitude shift.

AI Autocomplete Does More Than Speed Up Writing

AI autocomplete has become ubiquitous in digital communication, from Gmail to online surveys, promising to save time as users compose text. But Cornell University researchers have uncovered a troubling dimension to these AI writing assistants: they don't just change how people write; they also subtly shape views on social issues and covertly shift opinions without users even realizing it [1]. Published in Science Advances on March 11, the research reveals that biased AI suggestions can nudge people's positions on weighty topics like the death penalty, fracking, and voting rights for felons [2].

Source: Neuroscience News

Widespread Manipulation of Thoughts Across Political Leanings

Mor Naaman, professor of information science at Cornell Tech, led two large-scale experiments involving more than 2,500 participants to examine AI's impact on public opinion [3]. Participants wrote short essays on politically consequential topics while some received biased AI autocomplete suggestions engineered to favor predetermined positions. For the death penalty and genetically modified organisms, the AI leaned liberal; for fracking and felons' voting rights, it skewed conservative [3]. The results were consistent: participants exposed to biased AI suggestions moved almost half a point closer to the AI's position on a scale from 1 to 5, compared with those without such exposure [2].

What makes the persuasive power of AI particularly alarming is its stealth. The attitude shift occurred across different topics and political leanings, affecting even participants who completely ignored the AI's suggestions and typed their own text [1]. "It's the subtlest of manipulations," Naaman said, noting that roughly three-quarters of participants rated the AI's suggestions as "reasonable and balanced" despite their deliberate bias [2].

Source: Science News

Mitigation Measures Failed to Prevent Attitude Shift

The Cornell University researchers tested whether traditional defenses against misinformation could protect users from AI influence. Previous research has shown that warning people before exposure to misinformation, or debriefing them afterward, can provide immunity against believing it. But these mitigation measures proved ineffective against AI autocomplete's influence [3]. "We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped," Naaman explained. "Their attitudes about the issues still shifted" [1].

Lead author Sterling Williams-Ceci, a doctoral candidate in information science, expressed surprise at this finding. The team also tested showing participants a list of pro-argument points generated by language models before writing, rather than real-time autocomplete. This approach resulted in less attitude shift, suggesting that the interactive nature of writing tools amplifies their persuasive effect [3].

Political Outcomes and Homogenization of Thoughts at Stake

The implications extend far beyond individual user behavior. When large swaths of people use the same biased language models to form opinions on societal issues, the collective impact could influence political outcomes. "You only need 20,000 people in Pennsylvania" to flip an election's outcome, Naaman pointed out [2]. The research builds on work by Naaman and colleagues published in 2023 that first suggested short autocomplete suggestions could sway opinions. Since then, the use of such tools has exploded: Gmail now suggests writing entire emails, not just completing sentences [3].

Williams-Ceci emphasized that the influence is covert: "People do not notice it, and are unable to resist it, which can have serious consequences" [3]. The risk of homogenized thought looms as AI systems, whether inadvertently or purposefully, push entire populations toward certain viewpoints. "AI can have the effect of homogenizing our words and creativity, but also our thoughts," Naaman warned [2].

What Users Should Watch For

As AI writing assistants become standard features in digital communication, experts suggest adopting defensive strategies. Naaman recommends writing down your own thoughts first before turning to AI for help. "That way, at least I know that the seed of the idea is mine," he said [2]. While many language models include disclaimers like "ChatGPT can make mistakes. Check important info," the research demonstrates that people remain strikingly susceptible despite such warnings [2]. The question of how to inoculate people against covert AI manipulation remains unanswered, making this an urgent area for further investigation as AI becomes more deeply embedded in how we communicate and form opinions.
