4 Sources
[1]
Reranking partisan animosity in algorithmic social media feeds alters affective polarization
Social media algorithms profoundly impact our lives: They curate what we see (1) in ways that can shape our opinions (2-4), moods (5-7), and actions (8-12). Because of the power that these ranking algorithms have to direct our attention, the research literature has articulated many theories and results detailing the impact that ranking algorithms have on us (13-17). However, validating these theories and results has remained extremely difficult because the ranking algorithm behavior is determined by the social media platforms, and only the platforms themselves can test alternative feed designs and causally assess their impact. Platforms, however, face political and financial pressures that constrain the kinds of experiments they can launch and share (18). Concerns about lawsuits and the need to preserve engagement-driven revenue further limit what platforms are willing to test, leaving massive gaps in the design space of ranking algorithms that have been explored in naturalistic settings and at scale.

To address this gap, we present an approach that enables researchers to rerank participants' social media feeds in real time as they browse, without requiring platform permission or cooperation. We built a browser extension, a small add-on to a web browser that modifies how web pages appear or behave, similar to an ad blocker. Our extension intercepts and modifies X's web-based feed in real time and reranks the feed using large language model (LLM)-based rescoring, with only a negligible increase in page load time. This web extension allows us to rerank content according to experimentally controlled conditions. The design opens a new paradigm for algorithmic experimentation: It provides external researchers with a tool for conducting independent field experiments and evaluating the causal effects of algorithmic content curation on user attitudes and behaviors while preserving ecological validity.

This capability allowed us to investigate a pressing question: Can feed algorithms cause affective polarization, i.e., hostility toward opposing political parties (19-22)? This concern has grown since the 2016 US presidential election (23), and the debate remains ongoing after the 2020 and 2024 elections. If social media algorithms are causing affective polarization, they might not only bear responsibility for rising political incivility online (24), they might also pose a risk to trust in democratic institutions (25). In this case, isolating the algorithmic design choices that cause polarization could offer alternative algorithmic approaches (26).

A major hypothesized mechanism for how feed algorithms cause polarization is a self-reinforcing engagement loop: users engage with content aligning with their political views, the feed algorithm interprets this engagement as a positive signal, and the algorithm exposes even more politically aligned content to users, leading to a polarizing cycle. Some studies support this hypothesis, finding that online interactions exacerbate polarization (27), potentially because of the increased visibility of hostile political discussions (28), divisive language (29-33), and content that reinforces existing beliefs (34). However, large-scale field experiments aimed at reducing polarization by intervening on the feed algorithm -- for example, by increasing exposure to out-party content -- have found both a decrease (35) and an increase (36) in polarization.
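To make the intercept-and-rerank pattern concrete, the sketch below shows how a browser-extension content script could collect the posts currently rendered in a feed, request a score for each from a rescoring service such as the LLM-based rescoring described above, and reorder the posts in place. This is a minimal illustration only: the DOM selector, the local scoring endpoint, and the function names are assumptions made for this sketch, not the authors' released implementation.

// Hypothetical content-script sketch (TypeScript): collect rendered posts,
// score them, and reorder them in place without removing anything.
// The selector and the scoring service URL are illustrative assumptions.
type ScoredPost = { node: HTMLElement; score: number };

// Assumed local service that returns one score in [0, 1] per post text.
async function scorePosts(texts: string[]): Promise<number[]> {
  const res = await fetch("http://localhost:8080/score", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ texts }),
  });
  return (await res.json()).scores as number[];
}

async function rerankVisibleFeed(): Promise<void> {
  // Assume each post is rendered as an <article> element sharing one parent.
  const articles = Array.from(document.querySelectorAll("article"));
  if (articles.length === 0) return;

  const scores = await scorePosts(articles.map((a) => a.innerText));
  const posts: ScoredPost[] = articles.map((node, i) => ({ node, score: scores[i] }));

  // Stable reorder: lower-scoring (less hostile) posts move up; ties keep
  // their original order, and every post stays in the feed.
  const reordered = posts
    .map((p, i) => ({ ...p, i }))
    .sort((a, b) => a.score - b.score || a.i - b.i);

  const parent = articles[0].parentElement;
  if (!parent) return;
  for (const { node } of reordered) parent.appendChild(node);
}

rerankVisibleFeed();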
Similarly, recent large-scale experiments on Facebook and Instagram found no evidence that reduced exposure to in-party sources or a simpler reverse-chronological algorithm affected polarization and political attitudes (23, 37) during the 2020 US election. These mixed results reveal the difficulty in identifying what, if any, algorithmic intervention might help reduce polarization, especially during politically charged times.

We distilled the goals of these prior interventions to a direct hypothesis that we could operationalize through real-time LLM reranking: that feed algorithms cause affective polarization by exposing users specifically to content that polarizes. An algorithm that up-ranks content reflecting genuine political dialogue is less likely to polarize than one that up-ranks demagoguery. This content-focused hypothesis has been difficult to operationalize into interventions, making studies that intervene on cross-partisan exposure and reverse-chronological ranking attractive but more diffuse in their impact and thus more likely to observe mixed results. However, by connecting our real-time reranking infrastructure with recent advances in LLMs, we could create a ranking intervention that more directly targets the focal hypothesis (38) without needing platform collaboration. We drew, in particular, on a recent large-scale field experiment that articulated eight categories of antidemocratic attitudes and partisan animosity as bipartisan threats to the healthy functioning of democracy (39). We operationalized these eight categories into an artificial intelligence (AI) classifier that labels expressions of these constructs in social media posts, does so with accuracy comparable to trained annotators, and produces depolarization effects in a lab setting on a fixed feed (40). This real-time classification enabled us to perform a scalable, content-based reranking experiment on participants' own feeds in the field (41).

We conducted a preregistered field experiment on X, the most used social media platform for political discourse in the US (42), using our extension to dynamically rerank participants' social media content by either increasing or decreasing exposure to content that expresses these eight factors of antidemocratic attitudes and partisan animosity (AAPA) over the course of a week. The experiment was conducted during a pivotal moment in the 2024 US election cycle, from July to August 2024, an important period for understanding how social media feeds impact affective polarization. Major political events during the study period included the attempted assassination of Donald Trump, the withdrawal of Joe Biden from the 2024 presidential race, and the nomination of Kamala Harris as the Democratic Party's candidate. These events allow us to examine the impact of heterogeneous AAPA content on partisan polarization and hostility.

We measured the intervention's effect on affective polarization (43) and emotional experience (44). Compared with control conditions that did not rerank the feed, decreased AAPA exposure led to warmer feelings toward the political outgroup, whereas increased AAPA exposure led to colder feelings. These changes also affected participants' levels of negative emotions (anger and sadness) as measured through in-feed surveys.
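One way such an LLM-based classification step could be prompted is sketched below: ask a model to rate how strongly a post expresses any of the AAPA categories and return a single score. This is a hedged illustration, not the study's released classifier; the prompt wording, the model name, and the use of the OpenAI Node client are assumptions, and only the categories named elsewhere in this coverage are listed.

// Hypothetical AAPA scorer sketch (TypeScript + the OpenAI Node SDK).
// Prompt text and model choice are illustrative assumptions.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const AAPA_CATEGORIES = [
  "support for partisan violence",
  "support for undemocratic practices",
  "opposition to bipartisan cooperation",
  "biased evaluation of politicized facts",
  // the remaining categories from the original eight would be listed here
];

export async function scoreAapa(postText: string): Promise<number> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model; any capable instruction-following LLM could be swapped in
    temperature: 0,
    messages: [
      {
        role: "system",
        content:
          "Rate how strongly the following social media post expresses " +
          "antidemocratic attitudes or partisan animosity (categories: " +
          AAPA_CATEGORIES.join("; ") +
          "). Reply with a single number between 0 and 1.",
      },
      { role: "user", content: postText },
    ],
  });
  const raw = response.choices[0]?.message?.content ?? "0";
  const parsed = parseFloat(raw);
  // Clamp to [0, 1] and fall back to 0 if the model replies with non-numeric text.
  return Number.isFinite(parsed) ? Math.min(Math.max(parsed, 0), 1) : 0;
}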
[2]
Social media research tool lowers the political temperature
This development opens a path for researchers and individual users to have more control over what they see on social media.

A new tool shows it is possible to turn down the partisan rancor in an X feed - without removing political posts and without the direct cooperation of the platform. The Stanford-led research, published in Science, also indicates that it may one day be possible to let users take control of their own social media algorithms.

A multidisciplinary team created a seamless, web-based tool that reorders content to move posts lower in a user's feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters of the opposing party. In an experiment using the tool with about 1,200 participants over 10 days during the 2024 election, those who had antidemocratic content downranked showed more positive views of the opposing party. The effect was also bipartisan, holding true for people who identified as liberals or conservatives.

"Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them," said Michael Bernstein, a professor of computer science in Stanford's School of Engineering and the study's senior author. "We have demonstrated an approach that lets researchers and end users have that power."

The tool could also open ways to create interventions that not only mitigate partisan animosity, but also promote greater social trust and healthier democratic discourse across party lines, added Bernstein, who is also a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence.

For this study, the team drew from previous sociology research from Stanford, identifying categories of antidemocratic attitudes and partisan animosity that can be threats to democracy. In addition to advocating for extreme measures against the opposing party, these attitudes include statements that show rejection of any bipartisan cooperation, skepticism of facts that favor the other party's views, and a willingness to forgo democratic principles to help the favored party.

Preventing emotional hijacking

There is often an immediate, unavoidable emotional response to seeing this kind of content, said study co-author Jeanne Tsai. "This polarizing content can just hijack their attention by making people feel bad the moment they see it," said Tsai, a professor of psychology in the Stanford School of Humanities and Sciences.

The study brought together researchers from the University of Washington and Northeastern University, as well as Stanford, to tackle the problem from a range of disciplines, including computer science, psychology, information science, and communication.

The study's first author, Tiziano Piccardi, a former postdoctoral fellow in Bernstein's lab, created a web extension tool coupled with an artificial intelligence large language model that scans posts for these types of antidemocratic and extreme negative partisan sentiments. The tool then re-orders posts on the user's X feed in a matter of seconds.

Then, in separate experiments, the researchers had a group of participants, who consented to have their feeds modified, view X with this type of content downranked or upranked over 10 days, and compared their reactions to a control group. No posts were removed, but the more incendiary political posts appeared lower or higher in their content streams.
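A minimal sketch of that per-condition logic appears below, under the assumption that each post already carries a classifier score; the type names, the threshold, and the applyCondition helper are hypothetical, not the study's code. Given scored posts, the downranked arm pushes flagged posts toward the bottom, the upranked arm pushes them toward the top, and nothing is removed.

// Hypothetical per-condition reordering (TypeScript): keep every post,
// change only where flagged posts appear. Names and threshold are illustrative.
type Post = { id: string; text: string; aapaScore: number }; // score assumed in [0, 1]
type Condition = "downrank" | "uprank" | "control";

function applyCondition(posts: Post[], condition: Condition, threshold = 0.5): Post[] {
  if (condition === "control") return posts; // control feeds are left as served
  const flagged = posts.filter((p) => p.aapaScore >= threshold);
  const rest = posts.filter((p) => p.aapaScore < threshold);
  // Each arm keeps every post and preserves relative order within each group.
  return condition === "downrank" ? [...rest, ...flagged] : [...flagged, ...rest];
}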
The impact on polarization was clear, said Piccardi, who is now an assistant professor of computer science at Johns Hopkins University. "When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party," he said. "When they were exposed to more, they felt colder."

Small change with a potentially big impact

Before and after the experiment, the researchers surveyed participants on their feelings toward the opposing party on a scale of 1 to 100. Among the participants who had the negative content downranked, their attitudes improved on average by two points - equivalent to the estimated change in attitudes that has occurred among the general U.S. population over a period of three years.

Previous studies on social media interventions to mitigate this kind of polarization have shown mixed results. Those interventions have also been rather blunt instruments, the researchers said, such as ranking posts chronologically or stopping social media use altogether.

This study shows that a more nuanced approach is possible and effective, Piccardi said. It can also give people more control over what they see, and that might help improve their social media experience overall, since downranking this content decreased not only participants' polarization but also their feelings of anger and sadness.

The researchers are now looking into other interventions using a similar method, including ones that aim to improve mental health. The team has also made the code of the current tool available, so other researchers and developers can use it to create their own ranking systems independent of a social media platform's algorithm.
[3]
Partisan X posts boost political polarisation among users, research finds
Study shows small changes to tone of posts in 'for you' feed increase unfavourable feelings towards political opponents

Small changes to the tone of posts fed to users of X can increase feelings of political polarisation as much in a week as would have historically taken at least three years, research has found.

A groundbreaking experiment to gauge the potency of Elon Musk's social platform to increase political division found that when posts expressing anti-democratic attitudes and partisan animosity were boosted, even barely perceptibly, in the feeds of Democrat and Republican supporters there was a large change in their unfavourable feelings towards the other side.

The degree of increased division - known as "affective polarisation" - achieved in one week by the changes the academics made to X users' feeds was as great as would have on average taken three years between 1978 and 2020. Most of the more than 1,000 users who took part in the experiment during the 2024 US presidential election did not notice that the tone of their feed had been changed.

The campaign was marked by divisive viral posts on X, including a fake image of Kamala Harris cosying up to Jeffrey Epstein at a gala and an AI-generated image posted by Musk of Kamala Harris dressed as a communist dictator that had 84m views.

Repeated exposure to posts expressing antidemocratic attitudes and partisan animosity "significantly influences" users' feelings of polarisation and boosts sadness and anger, they found.

Musk bought Twitter in 2022, rebranded it X and introduced the "for you" feed, which instead of only showing posts relating to accounts users actively follow, uprates content calculated to maximise engagement.

The extent to which more antidemocratic posts make users feel greater animosity towards political opponents "demonstrates the power of the algorithm", said Martin Saveski, assistant professor at the University of Washington information school, who, with colleagues at the universities of Stanford, Johns Hopkins, and Northeastern, produced the study published in the journal Science.

"The change in their feed was barely perceptible, yet they reported a significant difference in how they felt about other people," added Tiziano Piccardi, assistant professor at the Johns Hopkins University computer science department and co-author of the research. "Based on US trends, that shift corresponds to roughly three years of polarisation."

The study also found that relatively subtle changes to the content of users' feeds can significantly reduce political animosity among Republicans and Democrats, suggesting X had the power to increase political harmony if Musk chose to use it in that way.

"What's exciting about these results is that there is something that the platforms can do to reduce polarisation," said Saveski. "It's a new approach they could take in designing their algorithms." X was approached for comment.

Eight in 10 American adults say that not only can Republicans and Democrats not agree on policies and plans, but they cannot agree on basic facts, according to Pew research. More than half of people in the UK believe the differences in people's political views are so divisive it is dangerous for society, recent polling by Ipsos found.

The changes in political polarisation resulting from exposure to X posts were measured using a novel approach. First, the academics used AI to analyse posts in X's "for you" feed in real time.
Then the system showed more divisive posts to one cohort and fewer divisive posts to another, a power normally the sole preserve of X. Divisive posts included those that showed support for undemocratic practices, partisan violence, opposition to bipartisan consensus and biased evaluations of politicised facts.

After a week of reading these subtly altered feeds, the researchers asked users to rate how warm or cold, favourable or unfavourable they felt towards their political opponents. The changes in "affective polarisation" came to more than two degrees on a 0 to 100 degree "feeling thermometer". This was the same amount of increased polarisation that typically occurred in the US over three years during the four decades to 2020. Feeding users fewer posts with antidemocratic attitudes and partisan animosity decreased political division by a similar amount.

Social media platforms have long been accused of encouraging divisive content to boost user engagement and therefore advertising revenues. But the research found that while there was a slight reduction in overall engagement in terms of time spent on the platform and numbers of posts viewed when divisive content was down-ranked, those users tended to "like" or repost more often.

"The success of this method shows that it can be integrated into social media AI to mitigate harmful personal and societal consequences," the authors wrote. "At the same time, our engagement analyses indicate a practical trade-off: interventions that down-rank [antidemocratic and partisan content] may reduce short-term engagement volume, posing challenges for engagement-driven business models and supporting the hypothesis that content that provokes strong reactions generates more engagement."
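One simple way to summarise the kind of shift described here (a sketch of a standard pre/post comparison against the control group, not necessarily the paper's exact estimand) is as the change in participants' 0-100 out-party thermometer ratings:

\[
\Delta = \left(\bar{T}^{\text{out}}_{\text{post}} - \bar{T}^{\text{out}}_{\text{pre}}\right)_{\text{treated}}
       - \left(\bar{T}^{\text{out}}_{\text{post}} - \bar{T}^{\text{out}}_{\text{pre}}\right)_{\text{control}},
\qquad T^{\text{out}} \in [0, 100],
\]

where \(\bar{T}^{\text{out}}\) is the average warmth participants report towards the opposing party; the roughly two-degree shifts reported above correspond to \(|\Delta| \approx 2\) on this scale.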
[4]
Social media algorithms can alter political views, study says
A browser extension powered by AI can change how people feel about opposing political views, according to new research that looked into the 2024 US presidential election.

Researchers in the United States have developed a new tool that allows independent scientists to study how social media algorithms affect users -- without needing permission from the platforms themselves. The findings suggest that platforms could reduce political polarisation by down-ranking hostile content in their algorithms.

The tool, a browser extension powered by artificial intelligence (AI), scans posts on X, formerly Twitter, for any themes of anti-democratic and extremely negative partisan views, such as posts that could call for violence or jailing supporters of an opposing party. It then re-orders posts on the X feed in a "matter of seconds," the study showed, so the polarising content was nearer to the bottom of a user's feed.

The team of researchers from Stanford University, the University of Washington, and Northeastern University then tested the browser extension on the X feeds of over 1,200 participants who consented to having them modified for 10 days in the lead-up to the 2024 US presidential election. Some of the participants used the browser extension that showed more divisive content, and the rest used the one that demoted it to a lower position on the feed. The results were published in the journal Science on Thursday.

The researchers asked participants to rate their feelings about the opposing political party on a scale of 1 to 100 during the experiment. For the participants who had divisive content downranked, attitudes towards the opposing party improved on average by two points, which is the estimated change in attitudes across the American public over three years. "These changes were comparable in size to 3 years of change in United States affective polarisation," the researchers noted.

The results were bipartisan, meaning the effects were consistent across party lines for people with liberal and conservative views.

Tiziano Piccardi, assistant professor of computer science at Johns Hopkins University, said the tool has a "clear" impact on polarisation. "When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party," he said in a statement. "When they were exposed to more, they felt colder."

The researchers note that this could be a new way of reranking social media feeds "without platform collaboration". "These interventions may result in algorithms that not only reduce partisan animosity but also promote greater social trust and healthier democratic discourse across party lines," the study concluded.

The study also looked into emotional responses and found that participants who saw less hostile content reported feeling less angry and sad while using the platform. But the emotional effects didn't continue after the study ended.

The researchers wrote that their study was only accessible to those logged in to X on a browser, not an app, which could limit how broadly the findings apply. Their study also did not measure the long-term impact that seeing less polarising content could have on X users.
Stanford-led research team creates a browser extension using AI to rerank social media feeds, successfully reducing political polarization by downranking hostile content during the 2024 election period.
A multidisciplinary team led by Stanford University has developed a groundbreaking browser extension that allows researchers to modify social media feeds in real time without requiring platform cooperation [1]. The tool, described in a study in the journal Science, represents a significant advancement in social media research methodology by enabling independent field experiments on algorithmic content curation [2].

The browser extension works by intercepting and modifying X's web-based feed using large language model (LLM)-based rescoring, with only negligible increases in page load time [1]. This capability addresses a critical gap in social media research, where platforms' political and financial pressures have historically limited the kinds of experiments that could be conducted at scale.
The research team, including computer scientists, psychologists, and communication experts from Stanford, the University of Washington, and Northeastern University, created an AI system that scans posts for antidemocratic attitudes and partisan animosity [2]. The tool identifies eight categories of content that pose bipartisan threats to democratic functioning, including advocacy for violence, rejection of bipartisan cooperation, and willingness to forgo democratic principles [4].

First author Tiziano Piccardi, now an assistant professor at Johns Hopkins University, developed the extension to reorder posts within seconds, moving problematic content lower in users' feeds without removing it entirely [2]. This approach represents a more nuanced intervention compared with previous studies that used blunt instruments like chronological ranking or complete social media cessation.

The experimental results from over 1,200 participants during the 2024 US presidential election demonstrated remarkable effectiveness in reducing affective polarization [3]. Participants who had antidemocratic content downranked improved their attitudes toward the opposing party by an average of two points on a 100-point scale, equivalent to three years of natural polarization change in the United States [4].

The effects were bipartisan, holding true for both liberal and conservative participants, and most users did not notice their feeds had been modified [3]. Professor Jeanne Tsai of Stanford's psychology department explained that polarizing content can "hijack attention by making people feel bad the moment they see it," emphasizing the immediate emotional impact of such material [2].
The research demonstrates that social media platforms possess significant power to influence political discourse and could implement similar interventions to reduce polarization if they chose to do so [3]. Senior author Michael Bernstein, a Stanford computer science professor, noted that while social media algorithms directly impact lives, until now only platforms had the ability to understand and shape them [2].

The study found that while downranking divisive content slightly reduced overall engagement in terms of time spent and posts viewed, users actually liked and reposted more often [3]. This suggests that reducing harmful content doesn't necessarily damage platform engagement metrics, challenging assumptions about the business necessity of promoting divisive content.

The researchers have made their code publicly available and are exploring additional interventions using similar methods, including applications for mental health improvement [2]. This opens possibilities for both researchers and individual users to have greater control over their social media experiences and algorithmic exposure.

Summarized by Navi