5 Sources
[1]
Reranking partisan animosity in algorithmic social media feeds alters affective polarization
Social media algorithms profoundly impact our lives: They curate what we see (1) in ways that can shape our opinions (2-4), moods (5-7), and actions (8-12). Because of the power that these ranking algorithms have to direct our attention, the research literature has articulated many theories and results detailing the impact that ranking algorithms have on us (13-17). However, validating these theories and results has remained extremely difficult because the ranking algorithm behavior is determined by the social media platforms, and only the platforms themselves can test alternative feed designs and causally assess their impact. Platforms, however, face political and financial pressures that constrain the kinds of experiments they can launch and share (18). Concerns about lawsuits and the need to preserve engagement-driven revenue further limit what platforms are willing to test, leaving massive gaps in the design space of ranking algorithms that have been explored in naturalistic settings and at scale.

To address this gap, we present an approach that enables researchers to rerank participants' social media feeds in real time as they browse, without requiring platform permission or cooperation. We built a browser extension, a small add-on to a web browser that modifies how web pages appear or behave, similar to an ad blocker. Our extension intercepts and modifies X's web-based feed in real time and reranks the feed using large language model (LLM)-based rescoring, with only a negligible increase in page load time. This web extension allows us to rerank content according to experimentally controlled conditions. The design opens a new paradigm for algorithmic experimentation: It provides external researchers with a tool for conducting independent field experiments and evaluating the causal effects of algorithmic content curation on user attitudes and behaviors while preserving ecological validity.

This capability allowed us to investigate a pressing question: Can feed algorithms cause affective polarization, i.e., hostility toward opposing political parties (19-22)? This concern has grown since the 2016 US presidential election (23), and the debate remains ongoing after the 2020 and 2024 elections. If social media algorithms are causing affective polarization, they might not only bear responsibility for rising political incivility online (24), they might also pose a risk to trust in democratic institutions (25). In this case, isolating the algorithmic design choices that cause polarization could offer alternative algorithmic approaches (26).

A major hypothesized mechanism for how feed algorithms cause polarization is a self-reinforcing engagement loop: users engage with content aligning with their political views, the feed algorithm interprets this engagement as a positive signal, and the algorithm exposes even more politically aligned content to users, leading to a polarizing cycle. Some studies support this hypothesis, finding that online interactions exacerbate polarization (27), potentially because of the increased visibility of hostile political discussions (28), divisive language (29-33), and content that reinforces existing beliefs (34). However, large-scale field experiments aimed at reducing polarization by intervening on the feed algorithm -- for example, by increasing exposure to out-party content -- have found both a decrease (35) and an increase (36) in polarization.
Similarly, recent large-scale experiments on Facebook and Instagram found no evidence that reduced exposure to in-party sources or a simpler reverse-chronological algorithm affected polarization and political attitudes (23, 37) during the 2020 US election. These mixed results reveal the difficulty in identifying what, if any, algorithmic intervention might help reduce polarization, especially during politically charged times.

We distilled the goals of these prior interventions to a direct hypothesis that we could operationalize through real-time LLM reranking: that feed algorithms cause affective polarization by exposing users specifically to content that polarizes. An algorithm that up-ranks content reflecting genuine political dialogue is less likely to polarize than one that up-ranks demagoguery. This content-focused hypothesis has been difficult to operationalize into interventions, making studies that intervene on cross-partisan exposure and reverse-chronological ranking attractive but more diffuse in their impact and thus more likely to observe mixed results. However, by connecting our real-time reranking infrastructure with recent advances in LLMs, we could create a ranking intervention that more directly targets the focal hypothesis (38) without needing platform collaboration. We drew, in particular, on a recent large-scale field experiment that articulated eight categories of antidemocratic attitudes and partisan animosity as bipartisan threats to the healthy functioning of democracy (39). We operationalized these eight categories into an artificial intelligence (AI) classifier that labels expressions of these constructs in social media posts, does so with accuracy comparable to trained annotators, and produces depolarization effects in a lab setting on a fixed feed (40). This real-time classification enabled us to perform a scalable, content-based reranking experiment on participants' own feeds in the field (41).

We conducted a preregistered field experiment on X, the most used social media platform for political discourse in the US (42), using our extension to dynamically rerank participants' social media content by either increasing or decreasing exposure to content that expresses these eight factors of antidemocratic attitudes and partisan animosity (AAPA) over the course of a week. The experiment was conducted during a pivotal moment in the 2024 US election cycle, from July to August 2024, an important period for understanding how social media feeds impact affective polarization. Major political events during the study period included the attempted assassination of Donald Trump, the withdrawal of Joe Biden from the 2024 presidential race, and the nomination of Kamala Harris as the Democratic Party's candidate. These events allow us to examine the impact of heterogeneous AAPA content on partisan polarization and hostility.

We measured the intervention's effect on affective polarization (43) and emotional experience (44). Compared with control conditions that did not rerank the feed, decreased AAPA exposure led to warmer feelings toward the political outgroup, whereas increased AAPA exposure led to colder feelings. These changes also affected participants' levels of negative emotions (anger and sadness) as measured through in-feed surveys.
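None of the excerpts above reproduce the released extension's actual code or prompts, so the following TypeScript sketch is purely illustrative of the approach described: a per-post AAPA score obtained from an LLM is blended with the platform's own ordering to demote (or promote) polarizing posts without removing any of them. The callLLM helper, the prompt wording, and the PENALTY weight are assumptions, not the study's implementation.

```typescript
// Minimal sketch of content-based reranking; not the authors' released code.
// callLLM is an assumed external helper wrapping some LLM API.
declare function callLLM(prompt: string): Promise<string>;

interface Post {
  id: string;
  text: string;
  engagementRank: number; // position assigned by the platform's own feed (0 = top)
}

type Condition = "downrank" | "uprank" | "control";

// Hypothetical classifier call: ask a model to rate how strongly a post expresses
// antidemocratic attitudes or partisan animosity, and parse a 0-1 score.
async function scoreAAPA(text: string): Promise<number> {
  const prompt =
    "Rate from 0 to 1 how strongly this post expresses antidemocratic attitudes " +
    "or partisan animosity (e.g., support for partisan violence, refusal of " +
    "bipartisan cooperation). Reply with only a number.\n\nPost: " + text;
  const raw = await callLLM(prompt);
  const score = parseFloat(raw);
  return Number.isFinite(score) ? Math.min(Math.max(score, 0), 1) : 0;
}

// Rerank: keep every post, but shift AAPA-heavy posts down (or up) relative to
// the platform's ordering. The control condition leaves the feed untouched.
async function rerankFeed(posts: Post[], condition: Condition): Promise<Post[]> {
  if (condition === "control") return posts;

  const scored = await Promise.all(
    posts.map(async (post) => ({ post, aapa: await scoreAAPA(post.text) }))
  );

  const direction = condition === "downrank" ? +1 : -1;
  const PENALTY = posts.length; // illustrative weight; the real tuning is not given here

  return scored
    .map(({ post, aapa }) => ({
      post,
      adjusted: post.engagementRank + direction * PENALTY * aapa,
    }))
    .sort((a, b) => a.adjusted - b.adjusted)
    .map(({ post }) => post);
}
```

In practice the scoring would presumably be cached and batched so that reranking stays within the negligible page-load overhead the authors report.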
[2]
Social media research tool lowers the political temperature
This development opens a path for researchers and individual users to have more control over what they see on social media. A new tool shows it is possible to turn down the partisan rancor in an X feed - without removing political posts and without the direct cooperation of the platform. The Stanford-led research, published in Science, also indicates that it may one day be possible to let users take control of their own social media algorithms. A multidisciplinary team created a seamless, web-based tool that reorders content to move posts lower in a user's feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters of the opposing party. In an experiment using the tool with about 1,200 participants over 10 days during the 2024 election, those who had antidemocratic content downranked showed more positive views of the opposing party. The effect was also bipartisan, holding true for people who identified as liberals or conservatives. "Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them," said Michael Bernstein, a professor of computer science in Stanford's School of Engineering and the study's senior author. "We have demonstrated an approach that lets researchers and end users have that power." The tool could also open ways to create interventions that not only mitigate partisan animosity, but also promote greater social trust and healthier democratic discourse across party lines, added Bernstein, who is also a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence. For this study, the team drew from previous sociology research from Stanford, identifying categories of antidemocratic attitudes and partisan animosity that can be threats to democracy. In addition to advocating for extreme measures against the opposing party, these attitudes include statements that show rejection of any bipartisan cooperation, skepticism of facts that favor the other party's views, and a willingness to forgo democratic principles to help the favored party.

Preventing emotional hijacking

There is often an immediate, unavoidable emotional response to seeing this kind of content, said study co-author Jeanne Tsai. "This polarizing content can just hijack their attention by making people feel bad the moment they see it," said Tsai, a professor of psychology in the Stanford School of Humanities and Sciences. The study brought together researchers from the University of Washington and Northeastern University, as well as Stanford, to tackle the problem from a range of disciplines, including computer science, psychology, information science, and communication. The study's first author, Tiziano Piccardi, a former postdoctoral fellow in Bernstein's lab, created a web extension tool coupled with an artificial intelligence large language model that scans posts for these types of antidemocratic and extreme negative partisan sentiments. The tool then re-orders posts on the user's X feed in a matter of seconds. Then, in separate experiments, the researchers had a group of participants, who consented to have their feeds modified, view X with this type of content downranked or upranked over 10 days, and compared their reactions to a control group. No posts were removed, but the more incendiary political posts appeared lower or higher in their content streams.
The impact on polarization was clear, said Piccardi, who is now an assistant professor of computer science at Johns Hopkins University. "When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party," he said. "When they were exposed to more, they felt colder."

Small change with a potentially big impact

Before and after the experiment, the researchers surveyed participants on their feelings toward the opposing party on a scale of 1 to 100. Among the participants who had the negative content downranked, their attitudes improved on average by two points - equivalent to the estimated change in attitudes that has occurred among the general U.S. population over a period of three years. Previous studies on social media interventions to mitigate this kind of polarization have shown mixed results. Those interventions have also been rather blunt instruments, the researchers said, such as ranking posts chronologically or stopping social media use altogether. This study shows that a more nuanced approach is possible and effective, Piccardi said. It can also give people more control over what they see, and that might help improve their social media experience overall, since downranking this content decreased not only participants' polarization but also their feelings of anger and sadness. The researchers are now looking into other interventions using a similar method, including ones that aim to improve mental health. The team has also made the code of the current tool available, so other researchers and developers can use it to create their own ranking systems independent of a social media platform's algorithm.
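As a rough illustration of how a browser extension can reorder a feed it does not control, here is a minimal content-script sketch. It assumes the rendered posts share a single parent container and uses hypothetical selectors and a hypothetical getAAPAScore helper; X's real timeline is virtualized, so the released tool necessarily handles more cases than this.

```typescript
// Minimal content-script sketch: watch the timeline, score rendered posts,
// and move higher-animosity posts lower. No posts are removed, only repositioned.

declare function getAAPAScore(postText: string): Promise<number>; // assumed helper, returns 0-1

const FEED_SELECTOR = '[data-testid="primaryColumn"]'; // hypothetical selector
const POST_SELECTOR = 'article[data-testid="tweet"]';  // hypothetical selector

async function reorderVisiblePosts(feed: Element): Promise<void> {
  // Assumes rendered posts are siblings under one container.
  const posts = Array.from(feed.querySelectorAll<HTMLElement>(POST_SELECTOR));
  const parent = posts[0]?.parentElement;
  if (!parent || posts.length < 2) return;

  // Score each visible post, then sort ascending so low-animosity posts come first;
  // the sort is stable, so equally scored posts keep their original order.
  const scored = await Promise.all(
    posts.map(async (el) => ({ el, score: await getAAPAScore(el.innerText) }))
  );
  scored.sort((a, b) => a.score - b.score);

  // Skip the DOM writes if the order is already correct (avoids observer churn).
  if (scored.every(({ el }, i) => el === posts[i])) return;

  // Appending an existing node moves it, so this reorders without deleting anything.
  for (const { el } of scored) parent.appendChild(el);
}

// Re-run whenever new posts stream into the page.
const observer = new MutationObserver(() => {
  const feed = document.querySelector(FEED_SELECTOR);
  if (feed) void reorderVisiblePosts(feed);
});
observer.observe(document.body, { childList: true, subtree: true });
```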
[3]
Down-ranking polarizing content lowers emotional temperature on social media - new research
This research was partially supported by a Hoffman-Yee grant from the Stanford Institute for Human-Centered Artificial Intelligence.

Reducing the visibility of polarizing content in social media feeds can measurably lower partisan animosity. To arrive at this finding, my colleagues and I developed a method that let us alter the ranking of people's feeds, previously something only the social media companies could do. Reranking social media feeds to reduce exposure to posts expressing anti-democratic attitudes and partisan animosity affected people's emotions and their views of people with opposing political views. I'm a computer scientist who studies social computing, artificial intelligence and the web. Because only social media platforms can modify their algorithms, we developed and released an open-source web tool that allowed us to rerank the feeds of consenting participants on X, formerly Twitter, in real time. Drawing on social science theory, we used a large language model to identify posts likely to polarize people, such as those advocating political violence or calling for the imprisonment of members of the opposing party. These posts were not removed; they were simply ranked lower, requiring users to scroll further to see them. This reduced the number of those posts users saw. We ran this experiment for 10 days in the weeks before the 2024 U.S. presidential election. We found that reducing exposure to polarizing content measurably improved participants' feelings toward people from the opposing party and reduced their negative emotions while scrolling their feed. Importantly, these effects were similar across political affiliations, suggesting that the intervention benefits users regardless of their political party.

Why it matters

A common misconception is that people must choose between two extremes: engagement-based algorithms or purely chronological feeds. In reality, there is a wide spectrum of intermediate approaches depending on what the algorithms are optimized to do. Feed algorithms are typically optimized to capture your attention, and as a result, they have a significant impact on your attitudes, moods and perceptions of others. For this reason, there is an urgent need for frameworks that enable independent researchers to test new approaches under realistic conditions. Our work offers a path forward, showing how researchers can study and prototype alternative algorithms at scale, and it demonstrates that, thanks to large language models, platforms finally have the technical means to detect polarizing content that can affect their users' democratic attitudes.

What other research is being done in this field

Testing the impact of alternative feed algorithms on live platforms is difficult, and such studies have only recently increased in number. For instance, a recent collaboration between academics and Meta found that changing the algorithmic feed to a chronological one was not sufficient to show an impact on polarization. A related effort, the Prosocial Ranking Challenge led by researchers at the University of California, Berkeley, explores ranking alternatives across multiple platforms to promote beneficial social outcomes. At the same time, the progress in large language model development enables richer ways to model how people think, feel and interact with others.
We are seeing growing interest in giving users more control, allowing people to decide what principles should guide what they see in their feeds - for example, the Alexandria library of pluralistic values and the Bonsai feed reranking system. Social media platforms, including Bluesky and X, are heading this way as well. A sketch of what such a tunable ranking objective could look like appears after this section.

What's next

This study represents our first step toward designing algorithms that are aware of their potential social impact. Many questions remain open. We plan to investigate the long-term effects of these interventions and test new ranking objectives to address other risks to online well-being, such as mental health and life satisfaction. Future work will explore how to balance multiple goals, such as cultural context, personal values and user control, to create online spaces that better support healthy social and civic interaction.

The Research Brief is a short take on interesting academic work.
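One way to picture the "spectrum of intermediate approaches" mentioned above is as a single scoring function whose weights slide between engagement-driven and chronological ranking, with an extra user-controlled penalty on AAPA-style content. The TypeScript sketch below is a hypothetical formulation for illustration only, not the objective used by the study or by any platform.

```typescript
// Illustrative ranking objective spanning engagement-based and chronological feeds.
// All weights and scoring inputs are hypothetical.

interface RankablePost {
  id: string;
  ageHours: number;            // time since posting
  predictedEngagement: number; // platform-style engagement prediction, 0-1
  aapaScore: number;           // LLM-estimated antidemocratic/animosity score, 0-1
}

interface RankingWeights {
  engagement: number;  // how strongly to chase attention
  recency: number;     // how chronological the feed should feel
  aapaPenalty: number; // how strongly to demote polarizing content (user-settable)
}

// Higher is better. engagement=1, recency=0, aapaPenalty=0 approximates a pure
// engagement feed; engagement=0, recency=1 approximates a chronological one.
function rankingScore(post: RankablePost, w: RankingWeights): number {
  const recency = 1 / (1 + post.ageHours); // simple decay, newest ≈ 1
  return (
    w.engagement * post.predictedEngagement +
    w.recency * recency -
    w.aapaPenalty * post.aapaScore
  );
}

function rankFeed(posts: RankablePost[], w: RankingWeights): RankablePost[] {
  return [...posts].sort((a, b) => rankingScore(b, w) - rankingScore(a, w));
}

// Example: mostly engagement-driven, but with a user-chosen penalty on
// antidemocratic and partisan-animosity content.
// const ranked = rankFeed(posts, { engagement: 1.0, recency: 0.2, aapaPenalty: 0.8 });
```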
[4]
Partisan X posts boost political polarisation among users, research finds
Study shows small changes to tone of posts in 'for you' feed increase unfavourable feelings towards political opponents

Small changes to the tone of posts fed to users of X can increase feelings of political polarisation as much in a week as would have historically taken at least three years, research has found. A groundbreaking experiment to gauge the potency of Elon Musk's social platform to increase political division found that when posts expressing anti-democratic attitudes and partisan animosity were boosted, even barely perceptibly, in the feeds of Democrat and Republican supporters, there was a large change in their unfavourable feelings towards the other side. The degree of increased division - known as "affective polarisation" - achieved in one week by the changes the academics made to X users' feeds was as great as the average shift seen over three years between 1978 and 2020. Most of the more than 1,000 users who took part in the experiment during the 2024 US presidential election did not notice that the tone of their feed had been changed. The campaign was marked by divisive viral posts on X, including a fake image of Kamala Harris cosying up to Jeffrey Epstein at a gala and an AI-generated image posted by Musk of Kamala Harris dressed as a communist dictator that had 84m views. Repeated exposure to posts expressing antidemocratic attitudes and partisan animosity "significantly influences" users' feelings of polarisation and boosts sadness and anger, the researchers found. Musk bought Twitter in 2022, rebranded it X and introduced the "for you" feed, which, instead of only showing posts relating to accounts users actively follow, uprates content calculated to maximise engagement. The extent to which more antidemocratic posts make users feel greater animosity towards political opponents "demonstrates the power of the algorithm", said Martin Saveski, assistant professor at the University of Washington information school, who, with colleagues at Stanford, Johns Hopkins and Northeastern universities, produced the study published in the journal Science. "The change in their feed was barely perceptible, yet they reported a significant difference in how they felt about other people," added Tiziano Piccardi, assistant professor at the Johns Hopkins University computer science department and co-author of the research. "Based on US trends, that shift corresponds to roughly three years of polarisation." The study also found that relatively subtle changes to the content of users' feeds can significantly reduce political animosity among Republicans and Democrats, suggesting X had the power to increase political harmony if Musk chose to use it in that way. "What's exciting about these results is that there is something that the platforms can do to reduce polarisation," said Saveski. "It's a new approach they could take in designing their algorithms." X was approached for comment. Eight in 10 American adults say that not only can Republicans and Democrats not agree on policies and plans, but they cannot agree on basic facts, according to Pew research. More than half of people in the UK believe the differences in people's political views are so divisive it is dangerous for society, recent polling by Ipsos found. The changes in political polarisation resulting from exposure to X posts were measured using a novel approach. First, the academics used AI to analyse posts in X's "for you" feed in real time.
Then the system showed more divisive posts to one cohort and fewer divisive posts to another, a power normally the sole preserve of X. Divisive posts included those that showed support for undemocratic practices, partisan violence, opposition to bipartisan consensus and biased evaluations of politicised facts. After a week of reading these subtly altered feeds, the researchers asked users to rate how warm or cold, favourable or unfavourable they felt towards their political opponents. The changes in "affective polarisation" amounted to more than two degrees on a 0 to 100 degree "feeling thermometer" - roughly the increase in polarisation that typically took three years to accumulate in the US between 1978 and 2020. Feeding users fewer posts with antidemocratic attitudes and partisan animosity decreased political division by a similar amount. Social media platforms have long been accused of encouraging divisive content to boost user engagement and therefore advertising revenues. But the research found that while there was a slight reduction in overall engagement in terms of time spent on the platform and numbers of posts viewed when divisive content was down-ranked, those users tended to "like" or repost more often. "The success of this method shows that it can be integrated into social media AI to mitigate harmful personal and societal consequences," the authors wrote. "At the same time, our engagement analyses indicate a practical trade-off: interventions that down-rank [antidemocratic and partisan content] may reduce short-term engagement volume, posing challenges for engagement-driven business models and supporting the hypothesis that content that provokes strong reactions generates more engagement."
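For readers who want the arithmetic behind the "more than two degrees" figure, a simplified version of the outcome comparison looks like the TypeScript sketch below: average each group's pre/post change on the 0-100 thermometer and difference it against the control group. The field names are hypothetical, and this is not the study's preregistered statistical model, which the articles here do not reproduce.

```typescript
// Simplified illustration of the outcome measure. Each participant rates the
// opposing party on a 0-100 feeling thermometer before and after the week;
// average changes are compared across conditions.

interface Participant {
  condition: "downrank" | "uprank" | "control";
  thermometerPre: number;  // 0 (cold/unfavourable) to 100 (warm/favourable)
  thermometerPost: number;
}

function meanThermometerChange(
  participants: Participant[],
  condition: Participant["condition"]
): number {
  const group = participants.filter((p) => p.condition === condition);
  if (group.length === 0) return NaN;
  const total = group.reduce(
    (sum, p) => sum + (p.thermometerPost - p.thermometerPre),
    0
  );
  return total / group.length;
}

// The effect reported in the articles corresponds to roughly a two-point gap
// relative to control (warmer after downranking, colder after upranking).
function treatmentEffect(
  participants: Participant[],
  treated: Participant["condition"]
): number {
  return (
    meanThermometerChange(participants, treated) -
    meanThermometerChange(participants, "control")
  );
}
```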
[5]
Social media algorithms can alter political views, study says
A browser extension powered by AI can change how people feel about those with opposing political views, according to new research that looked into the 2024 US presidential election. Researchers in the United States have developed a new tool that allows independent scientists to study how social media algorithms affect users -- without needing permission from the platforms themselves. The findings suggest that platforms could reduce political polarisation by down-ranking hostile content in their algorithms. The tool, a browser extension powered by artificial intelligence (AI), scans posts on X, formerly Twitter, for any themes of anti-democratic and extremely negative partisan views, such as posts that could call for violence or jailing supporters of an opposing party. It then re-orders posts on the X feed in a "matter of seconds," the study showed, so the polarising content was nearer to the bottom of a user's feed. The team of researchers from Stanford University, the University of Washington, and Northeastern University then tested the browser extension on the X feeds of over 1,200 participants who consented to having them modified for 10 days in the lead-up to the 2024 US presidential election. Some of the participants used the browser extension that showed more divisive content, and the rest used the one that demoted it to a lower position on the feed. The results were published in the journal Science on Thursday. The researchers asked participants to rate their feelings about the opposing political party on a scale of 1 to 100 during the experiment. For the participants whose feeds had divisive content downranked, their attitudes towards the opposing party improved on average by two points, which is the estimated change in attitudes across the American public over three years. "These changes were comparable in size to 3 years of change in United States affective polarisation," the researchers noted. The results were bipartisan, meaning the effects were consistent across party lines for people with liberal and conservative views. Tiziano Piccardi, assistant professor of computer science at Johns Hopkins University, said the tool has a "clear" impact on polarisation. "When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party," he said in a statement. "When they were exposed to more, they felt colder." The researchers note that this could be a new way of reranking social media feeds "without platform collaboration". "These interventions may result in algorithms that not only reduce partisan animosity but also promote greater social trust and healthier democratic discourse across party lines," the study concluded. The study also looked into emotional responses and found that participants whose feeds had hostile content downranked reported feeling less angry and sad while using the platform. But the emotional effects didn't continue after the study ended. The researchers noted that the intervention was only accessible to those logged in to X on a browser, not an app, which could limit its effects. Their study also did not measure the long-term impact that seeing less polarising content could have on X users.
Stanford-led researchers developed an AI-powered browser extension that reranks X feeds without platform permission, demonstrating that downranking polarizing content for just 10 days reduced partisan animosity by an amount equivalent to three years of attitude change. The tool enables independent field experiments on how social media algorithms shape political views; this one was conducted during the 2024 US election.

A multidisciplinary team from Stanford University, the University of Washington, and Northeastern University has developed an AI research tool that enables scientists to modify social media algorithms without requiring platform collaboration [1][2]. The AI-powered browser extension intercepts and modifies X (formerly Twitter) feeds in real time using large language models (LLMs) to identify and rerank content based on experimentally controlled conditions. This approach opens a new paradigm for independent field experiments, allowing researchers to evaluate the causal effects of algorithmic content curation on user attitudes while preserving ecological validity [1].

The tool addresses a critical gap in understanding how social media algorithms impact democratic discourse. "Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them," said Michael Bernstein, professor of computer science at Stanford's School of Engineering and the study's senior author [2]. Published in the journal Science, the research demonstrates that reranking social media feeds can measurably alter affective polarization without removing political posts entirely [3].

The study focused on whether social media algorithms cause affective polarization by exposing users specifically to content that polarizes. Drawing from previous sociology research, the team identified eight categories of antidemocratic attitudes and partisan animosity as bipartisan threats to democratic functioning [1][2]. These include advocating for violence, calling for imprisonment of opposing party supporters, rejection of bipartisan cooperation, skepticism of facts favoring the other party's views, and willingness to forgo democratic principles [2].

Tiziano Piccardi, the study's first author and now assistant professor at Johns Hopkins University, created the web extension tool that scans posts for these antidemocratic and extreme negative partisan sentiments [2]. The system reorders posts on users' feeds within seconds, moving polarizing content lower while keeping all posts visible [5]. This content-focused intervention is more targeted than previous blunt approaches such as chronological ranking or stopping social media use altogether [2].

Over 1,200 participants who consented to have their feeds modified used the tool for 10 days during the weeks before the 2024 election [2][5]. Researchers surveyed participants on their feelings toward the opposing party on a scale of 1 to 100 before and after the experiment. Among participants who had negative content downranked, attitudes improved on average by two points, equivalent to the estimated change in attitudes that has occurred among the general U.S. population over three years [2][4].

"The change in their feed was barely perceptible, yet they reported a significant difference in how they felt about other people," Piccardi explained [4]. Most participants did not even notice their feeds had been modified [4]. The impact on political polarization was bipartisan, with effects consistent across party lines for people identifying as liberals or conservatives [2][5].

Beyond reducing political polarization, downranking polarizing content also decreased participants' negative emotions while browsing. "This polarizing content can just hijack their attention by making people feel bad the moment they see it," said study co-author Jeanne Tsai, professor of psychology in the Stanford School of Humanities and Sciences [2]. Participants exposed to less divisive content reported feeling less angry and sad while using the platform [3][5].

The research revealed a practical trade-off for platforms. While there was a slight reduction in overall user engagement in terms of time spent and posts viewed when divisive content was down-ranked, those users tended to like or repost more often [4]. This finding challenges the assumption that platforms must choose between engagement-based algorithms and purely chronological feeds, suggesting intermediate approaches exist depending on optimization goals [3].

The study demonstrates that, thanks to large language models, platforms finally have the technical means to detect polarizing content that can affect users' democratic attitudes [3]. "What's exciting about these results is that there is something that the platforms can do to reduce polarisation," said Martin Saveski, assistant professor at the University of Washington information school [4]. The barely perceptible change in feeds demonstrates the algorithm's power to shift users' feelings of polarization significantly, toward either greater division or greater harmony [4].

The tool could enable interventions that not only mitigate partisan animosity but also promote greater social trust and healthier democratic discourse across party lines [2][5]. Researchers have made the code available, opening possibilities for user control over their own social media algorithms [2]. This aligns with growing interest in giving users more control to decide what principles should guide their feeds, with platforms including Bluesky and X moving in this direction [3].

The team plans to investigate long-term effects of these interventions and test new ranking objectives to address other risks to online well-being, including mental health and life satisfaction [3]. Future work will explore how to balance multiple goals such as cultural context, personal values, and user control to create online spaces that better support healthy social and civic interaction [3]. The study noted limitations: the tool was only accessible to those logged into X on a browser rather than an app, and the experiment did not measure long-term impacts of seeing less polarizing content [5]. The research represents a first step toward designing algorithms aware of their potential social impact, with many questions remaining about how such interventions might be deployed, with or without platform collaboration [1][3].