3 Sources
[1]
Don't blame the algorithm: Polarization may be inherent in social media
Following the 2024 U.S. presidential election, millions of aggravated X users flocked to Bluesky to avoid the partisan vitriol that had overtaken the older social media platform. Designed without an algorithm to determine what content users see, Bluesky aimed to avoid X's pitfalls. For a while, it seemed to work: Individuals with lots of followers weren't overamplified, nasty language was less common, and misinformation seemed held at bay. But less than 1 year later, some of social media's typical ills had emerged on Bluesky. Some complain it has become a bit of an echo chamber, albeit a left-leaning one.

Now, simulations with a scaled-down platform populated with virtual users generated with artificial intelligence (AI) may have revealed why social media tends to become so polarized. The simple platform had no nuanced algorithm designed to feed users posts that would appeal to them and keep them online the longest. Yet it still split into insular communities, researchers report in a preprint posted to the arXiv server last week. The results suggest that just the basic functions of social media -- posting, reposting, and following -- inevitably produce polarization. Others caution, however, that cliquishness may have been baked into the AI-generated users.

The study's "central outcomes are compelling," says Kate Starbird, an information scientist at the University of Washington who studies online rumors and was not involved with the work. "There's a lot of things about it that resonate with hypotheses that I and others have had about online systems."

Experiments studying social media with real participants can be expensive and ethically tricky, and they require reluctant social media companies to cooperate. So, Maik Larooij and Petter Törnberg, computational social scientists at the University of Amsterdam, turned to "generative social simulation," a technique that uses chatbot-style AI programs called large language models (LLMs) to stand in for human subjects. They aimed to strip social media down to its bare bones, then build it back up to determine the root of three negative phenomena: the emergence of partisan echo chambers (like following like), the concentration of influence among a few posters (the rich getting richer), and the amplification of extreme voices (the so-called social media prism).

The network consisted of 500 virtual users, each of whom was assigned characteristics such as age, gender, religion, political leaning, and education, based on real personas from national surveys of voters called the American National Election Studies. In three different trials, the researchers used each of three popular LLMs -- ChatGPT, Llama, and DeepSeek -- to expand the users into more nuanced personas with hobbies and occupations, and then to make decisions based on those profiles. In the experiments, a randomly selected user would face three choices: Choose a news article from 10 random options (out of 210,000) and write a post about it, repost something, or follow another user based on their profile. The user's choices were influenced by their feed, which consisted of 10 posts. Half were from accounts the user followed and half were popular posts from people the user didn't follow. The network ran for 10,000 cycles in each test.

But no matter which LLM the researchers used, the platform inevitably developed the negative trifecta of echo chambers, concentrated influence, and extreme voices.
"We were expecting that we would have to work very hard in some way to produce this effect," Törnberg says. But instead, "We get this toxic network that is forming as a result of these just basic actions of reposting and following." To attempt to fix the toxicity, Törnberg and Larooij tested six simple interventions, including displaying posts solely chronologically, instead of based on engagement. They also tested what amounted to antialgorithms, routines that showed a user the least engaged posts instead of the most engaged ones, or that showed the user posts expressing political views opposite their own. None of the methods worked completely, and some actually increased the nastiness. "I was a bit disappointed, to be honest," Törnberg says. "Because this was supposed to be the optimistic paper." Starbird sees the model bringing out "a resonance between human nature and the attention dynamics of social media." Even if you take out the algorithm as on BlueSky, she says, "the system is still wired in such a way that some of those toxicities remain." Filippo Menczer, a computer scientist at the University of Indiana, isn't convinced. AI is a "very huge black box," he says. The LLMs are trained on online human behaviors already reflective of social media toxicity, so the model may be "hard-coded" to devolve into polarization. Even more subtle, he says, problematic algorithms already shape the real online behavior used to train the LLMs, so it's conceivable that they indirectly influenced the outcome of the virtual experiment, making it impossible to rule out their importance. Jennifer Allen, a computational social scientist at New York University, says the caveats can't be ignored but that "there's a lot of potential with what they found." Though there may never be "easy, straightforward solutions" to fixing social media, she thinks individual users could try tamping down polarization by posting more neutral, bipartisan content. Understanding how online polarization emerges is vital, Starbird emphasizes. "I don't think you can understand politics in this moment without understanding what's happening on social media."
[2]
Social media toxicity can't be fixed by changing the algorithms
Experiments involving AI chatbots interacting on a simulated social media platform suggest efforts to design out antagonistic user behaviour will not succeed.

The polarising impact of social media isn't just the result of bad algorithms - it is inevitable because of the core components of how the platforms work, a study with AI-generated users has found. It suggests the problem won't be fixed unless we fundamentally reimagine the world of online communication.

Petter Törnberg at the University of Amsterdam in the Netherlands and his colleagues set up 500 AI chatbots designed to mimic a range of political beliefs in the US, based on the American National Election Studies Survey. Those bots, powered by the GPT-4o mini large language model, were then instructed to interact with one another on a simple social network the researchers had designed with no ads or algorithms.

During five runs of the experiment, each involving 10,000 actions, the AI agents tended to follow people with whom they shared political affiliations, while those with more partisan views gained more followers and reposts. Overall attention likewise gravitated towards the more partisan posters.

In a previous study, Törnberg and his colleagues explored whether simulated social networks with different algorithms could identify routes to tamp down political polarisation - but the new research seems to contradict their earlier findings. "We were expecting this [polarisation] to be something that's driven by algorithms," Törnberg says. "[We thought] that the platforms are designed for this - to produce these outcomes - because they are designed to maximise engagement and to piss you off and so on."

Instead, they found it wasn't the algorithms themselves that seemed to be causing the issue, which could make any attempts to weed out antagonistic user behaviour by design very difficult. "We set up the simplest platform we could imagine, and then, boom, we already have these outcomes," he says. "That already suggests that this is stemming from something very fundamental to the fact that we have posting behaviour, reposting and following."

To see whether those behaviours could be either muted or countered, the researchers also tested six potential solutions, including a solely chronological feed, giving less prominence to viral content, amplifying opposing views and empathetic and reasoned content, hiding follower and repost counts, and hiding profile bios. Most of the interventions made little difference: cross-party mixing changed by no more than about 6 per cent, and the share of attention hogged by top accounts shifted between 2 and 6 per cent - while others, such as hiding biographies of the users involved, actually made the problem worse. When there were gains in one area, they were countered by negative impacts elsewhere. Fixes that reduced user inequality made extreme posts more popular, while alterations to soften partisanship funnelled even more attention to a small elite.

"Most social media activities are always fruit of the poisonous tree - the beginning problems of social media always lie with their foundational design, and as such can encourage the worst of human behaviour," says Jess Maddox at the University of Georgia.

While Törnberg acknowledges the experiment is a simulation that could simplify some mechanisms, he thinks it can tell us what social platforms need to do to reduce polarisation. "We might need more fundamental interventions and need more fundamental rethinking," he says.
"It might not be enough to wiggle with algorithms and change the parameters of the platform, but [we might] need to rethink more fundamentally the structure of interaction and how these spaces structure our politics."
[3]
Researchers Made a Social Media Platform Where Every User Was AI. The Bots Ended Up at War
Social platforms like Facebook and X exacerbate the problem of political and social polarization, but they don't create it. A recent study conducted by researchers at the University of Amsterdam in the Netherlands put AI chatbots in a simple social media structure to see how they interacted with each other and found that, even without the invisible hand of the algorithm, they tend to organize themselves based on their pre-assigned affiliations and self-sort into echo chambers.

The study, a preprint of which was recently published on arXiv, took 500 AI chatbots powered by OpenAI's large language model GPT-4o mini and assigned them specific personas. Then, they were unleashed onto a simple social media platform that had no ads and no algorithms offering content discovery or recommended posts served into a user's feed. Those chatbots were tasked with interacting with each other and the content available on the platform.

Over the course of five different experiments, all of which involved the chatbots engaging in 10,000 actions, the bots tended to follow other users who shared their own political beliefs. The study also found that users who posted the most partisan content tended to get the most followers and reposts. The findings don't exactly speak well of us, considering the chatbots were intended to replicate how humans interact.

Of course, none of this is truly independent from the influence of the algorithm. The bots have been trained on human interaction that has been defined for decades now by how we behave online in an algorithm-dominated world. They are emulating the already poison-brained versions of ourselves, and it's not clear how we come back from that.

To combat the self-selecting polarization, the researchers tried a handful of solutions, including offering a chronological feed, devaluing viral content, hiding follower and repost figures, hiding user profiles, and amplifying opposing views. (The researchers had success with that last one in a previous study, which managed to create high engagement and low toxicity in a simulated social platform.) None of the interventions really made a difference, failing to create more than a 6% shift in the engagement given to partisan accounts. In the simulation that hid user bios, the partisan divide actually got worse, and extreme posts got even more attention.

It seems social media as a structure may simply be untenable for humans to navigate without reinforcing our worst instincts and behaviors. Social media is a fun house mirror for humanity; it reflects us, but in the most distorted of ways. It's not clear there are strong enough lenses to correct how we see each other online.
A new study using AI-generated users on a simulated social media platform suggests that polarization and toxicity are inherent to social media's basic functions, rather than solely the result of algorithms.
A groundbreaking study conducted by researchers at the University of Amsterdam has shed new light on the persistent issue of polarization in social media. Using artificial intelligence to simulate user behavior, the study suggests that the toxicity often associated with social platforms may be inherent to their basic structure, rather than solely the result of complex algorithms [1].
The research team, led by Maik Larooij and Petter Törnberg, employed a novel approach called "generative social simulation." They created a scaled-down social media platform populated with 500 virtual users generated by large language models (LLMs) such as ChatGPT, Llama, and DeepSeek [1].
These AI-powered users were assigned characteristics based on real personas from American National Election Studies surveys, including age, gender, religion, political leaning, and education. The simulated platform was stripped of complex algorithms, ads, and content recommendation systems, focusing solely on basic functions like posting, reposting, and following [2].
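To make the setup concrete, here is a minimal sketch of what a single activation step in such a generative social simulation might look like. The Agent and Post structures, the build_feed() helper, and the decide() callable (standing in for the persona-conditioned LLM call) are illustrative assumptions, not code from the paper.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Post:
    author: int
    text: str
    reposts: int = 0

@dataclass
class Agent:
    uid: int
    persona: str                        # e.g. "42, male, evangelical, conservative, high-school"
    follows: set = field(default_factory=set)

def build_feed(agent, posts, k=10):
    """Assumed feed rule: half from accounts the agent follows,
    half popular posts from accounts it does not follow."""
    followed = [p for p in posts if p.author in agent.follows]
    others = sorted((p for p in posts if p.author not in agent.follows),
                    key=lambda p: p.reposts, reverse=True)
    return random.sample(followed, min(k // 2, len(followed))) + others[:k // 2]

def step(agent, posts, news_pool, decide):
    """One activation: the agent sees its feed plus a random sample of news
    articles and chooses to post about an article, repost, or follow someone.
    decide() is a placeholder for an LLM call conditioned on the persona."""
    feed = build_feed(agent, posts)
    articles = random.sample(news_pool, min(10, len(news_pool)))
    return decide(persona=agent.persona, feed=feed, articles=articles)
```

In the reported experiments, a loop of this kind ran for 10,000 such activations per trial.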
The results of the study were striking. Despite the absence of sophisticated algorithms designed to maximize engagement, the simulated platform inevitably developed three key negative phenomena: partisan echo chambers (like following like), the concentration of influence among a small number of accounts, and the amplification of extreme voices.
These outcomes emerged consistently across multiple trials using different LLMs, suggesting that the basic actions of social media interaction are sufficient to produce a toxic online environment [1].
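As a hedged illustration (my own construction, not the paper's exact measures), outcomes like these can be quantified on the resulting network: the share of follow edges that cross party lines captures echo chambers, and the share of reposts going to the top accounts captures concentration of influence.

```python
def cross_party_share(follow_edges, party):
    """Fraction of follow edges linking users with different political leanings;
    a low value indicates echo chambers (like following like)."""
    if not follow_edges:
        return 0.0
    cross = sum(1 for follower, followed in follow_edges
                if party[follower] != party[followed])
    return cross / len(follow_edges)

def top_attention_share(repost_counts, top_n=10):
    """Share of all reposts received by the top_n most-reposted accounts;
    a high value indicates influence concentrated in a small elite."""
    total = sum(repost_counts.values())
    if total == 0:
        return 0.0
    return sum(sorted(repost_counts.values(), reverse=True)[:top_n]) / total

# Toy example:
edges = [(1, 2), (1, 3), (4, 5)]
party = {1: "left", 2: "left", 3: "right", 4: "right", 5: "right"}
print(cross_party_share(edges, party))                     # 0.33
print(top_attention_share({1: 50, 2: 5, 3: 1}, top_n=1))   # ~0.89
```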
The researchers tested six interventions to mitigate the observed toxicity: a purely chronological feed, giving less prominence to viral content, amplifying opposing views, boosting empathetic and reasoned content, hiding follower and repost counts, and hiding profile bios.
Surprisingly, none of these methods proved entirely effective. Most made little difference, with changes in cross-party mixing limited to about 6% and shifts in attention to top accounts ranging between 2% and 6%, and some interventions, such as hiding user bios, even exacerbated the issues [2].
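Several of these interventions can be thought of as swapping the rule that orders a user's feed. The ranking functions below are an illustrative sketch under that framing, not the authors' implementation; the Post fields are assumed for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_party: str   # political leaning of the post's author
    reposts: int        # engagement the post has received
    timestamp: float    # when the post was made

def rank_by_engagement(posts):
    """Baseline: most-reposted posts first."""
    return sorted(posts, key=lambda p: p.reposts, reverse=True)

def rank_chronological(posts):
    """Intervention: a purely chronological feed, newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def rank_downweight_viral(posts):
    """Intervention: give less prominence to viral content (least-reposted first)."""
    return sorted(posts, key=lambda p: p.reposts)

def rank_amplify_opposing(posts, viewer_party):
    """Intervention: surface posts from the opposite political leaning first."""
    return sorted(posts, key=lambda p: p.author_party == viewer_party)
```

Per the reported results, swapping in alternatives like these shifted the polarization measures by only a few percentage points, and sometimes traded gains on one measure for losses on another.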
The study's findings have significant implications for our understanding of social media dynamics. Kate Starbird, an information scientist at the University of Washington, noted that the results align with existing hypotheses about online systems and highlight the resonance between human nature and social media attention dynamics [1].
However, some experts, like Filippo Menczer from Indiana University, caution against overinterpreting the results. They point out that the AI models used in the study were trained on online behavior already shaped by algorithm-driven platforms, and may therefore carry that toxicity into the simulation [1].
The study's outcomes suggest that addressing social media toxicity may require more than tweaking algorithms or implementing surface-level changes. Törnberg emphasizes the need for a fundamental reimagining of online communication structures [2].
As social media continues to play a crucial role in shaping public discourse and political landscapes, understanding and addressing these inherent tendencies towards polarization becomes increasingly vital for the health of our digital and democratic ecosystems [3].