AI Research Tool Shows Social Media Algorithms Can Reduce Political Polarization by 3 Years in Days

Reviewed by Nidhi Govil

5 Sources


Stanford-led researchers developed an AI-powered browser extension that reranks X feeds without platform permission, demonstrating that downranking polarizing content reduced partisan animosity by an amount equivalent to three years of typical attitude change in just 10 days. The tool enables independent field experiments on how social media algorithms shaped political views during the 2024 election.


Stanford Researchers Break New Ground in Social Media Algorithm Studies

A multidisciplinary team from Stanford University, the University of Washington, and Northeastern University has developed an AI research tool that enables scientists to modify social media algorithms without requiring platform collaboration.[1][2] The AI-powered browser extension intercepts and modifies X (formerly Twitter) feeds in real time, using large language models (LLMs) to identify and rerank content under experimentally controlled conditions. This approach opens a new paradigm for independent field experiments, allowing researchers to evaluate the causal effects of algorithmic content curation on user attitudes while preserving ecological validity.[1]

The tool addresses a critical gap in understanding how social media algorithms affect democratic discourse. "Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them," said Michael Bernstein, professor of computer science at Stanford's School of Engineering and the study's senior author.[2] Published in Science, the research demonstrates that reranking social media feeds can measurably alter affective polarization without removing political posts entirely.[3]

Downranking Polarizing Content Reduces Partisan Animosity

The study examined whether social media algorithms cause affective polarization by exposing users to polarizing content. Drawing on previous sociology research, the team identified eight categories of antidemocratic attitudes and partisan animosity as bipartisan threats to democratic functioning.[1][2] These include advocacy of violence, calls to imprison opposing-party supporters, rejection of bipartisan cooperation, skepticism of facts that favor the other party's views, and willingness to forgo democratic principles.[2]

Tiziano Piccardi, the study's first author and now an assistant professor at Johns Hopkins University, created the web extension that scans posts for these antidemocratic and extreme negative partisan sentiments.[2] The system reorders posts in users' feeds within seconds, moving polarizing content lower while keeping all posts visible.[5] This content-focused intervention proved more effective than blunter approaches such as chronological ranking or stopping social media use altogether.[2]
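As a rough illustration of this mechanism, the sketch below substitutes a keyword heuristic for the study's LLM classifier (the actual system queries a language model per post) and downranks flagged posts with a stable sort so every post stays visible. The function names and keyword list are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    text: str

def flags_antidemocratic(post: Post) -> bool:
    """Stand-in for the study's LLM classifier, which labels posts
    expressing antidemocratic attitudes or extreme partisan animosity.
    A keyword heuristic substitutes here for the real model call."""
    markers = ("lock them up", "deserve violence", "never cooperate")
    return any(m in post.text.lower() for m in markers)

def rerank(feed: list[Post]) -> list[Post]:
    """Downrank flagged posts: Python's stable sort moves them lower
    in the feed while keeping every post visible and preserving the
    relative order within the flagged and unflagged groups."""
    return sorted(feed, key=flags_antidemocratic)  # False (0) ranks above True (1)
```

The stable sort matters here: the intervention demotes polarizing posts without otherwise reshuffling the feed, which fits the finding that participants barely noticed the change.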

Field Experiment During 2024 Election Shows Dramatic Results

Over 1,200 participants who consented to have their feeds modified used the tool for 10 days in the weeks before the 2024 election.[2][5] Researchers surveyed participants on their feelings toward the opposing party on a scale of 1 to 100 before and after the experiment. Among participants whose feeds had negative content downranked, attitudes improved by an average of two points, equivalent to the estimated change in attitudes across the general U.S. population over three years.[2][4]

"The change in their feed was barely perceptible, yet they reported a significant difference in how they felt about other people," Piccardi explained.[4] Most participants did not even notice that their feeds had been modified.[4] The effect on political polarization was bipartisan, consistent across participants identifying as liberal or conservative.[2][5]

Emotional Impact and User Engagement Trade-offs

Beyond reducing political polarization, downranking polarizing content also eased participants' negative emotions while browsing. "This polarizing content can just hijack their attention by making people feel bad the moment they see it," said study co-author Jeanne Tsai, professor of psychology in Stanford's School of Humanities and Sciences.[2] Participants exposed to less divisive content reported feeling less angry and sad while using the platform.[3][5]

The research also revealed a practical trade-off for platforms. While overall engagement, measured as time spent and posts viewed, dipped slightly when divisive content was downranked, those users tended to like or repost more often.[4] This finding challenges the assumption that platforms must choose between engagement-based algorithms and purely chronological feeds, suggesting that intermediate approaches exist depending on optimization goals.[3]

Implications for Platform Design and User Control

The study demonstrates that classifiers built on large language models now give platforms the technical means to detect polarizing content that affects users' democratic attitudes.[3] "What's exciting about these results is that there is something that the platforms can do to reduce political polarization," said Martin Saveski, assistant professor at the University of Washington Information School. The results demonstrate the power of ranking algorithms to significantly influence users' feelings of polarization, amplifying either societal division or harmony.[4]

The tool could enable interventions that not only mitigate partisan animosity but also promote greater social trust and healthier democratic discourse across party lines.[2][5] The researchers have made the code available, opening possibilities for users to control their own social media algorithms.[2] This aligns with growing interest in giving users more say over which principles guide their feeds, a direction in which platforms including Bluesky and X are already moving.[3]

Future Research and Open Questions

The team plans to investigate the long-term effects of these interventions and to test new ranking objectives that address other risks to online well-being, including mental health and life satisfaction.[3] Future work will explore how to balance goals such as cultural context, personal values, and user control to create online spaces that better support healthy social and civic interaction.[3] The study noted limitations: the tool was accessible only to participants logged into X in a browser rather than the app, and it did not measure the long-term impact of seeing less polarizing content.[5] The research represents a first step toward designing algorithms that are aware of their potential social impact, with many open questions about how researchers might implement such changes without platform collaboration.[1][3]

TheOutpost.ai

© 2025 Triveous Technologies Private Limited