5 Sources
[1]
AI language models develop social norms like groups of people
Groups of large language models playing simple interactive games can develop social norms, such as adopting their own rules for how language is used, according to a study published this week in Science Advances. Social conventions such as greeting a person by shaking their hand or bowing represent the "basic building blocks of any coordinated society", says co-author Andrea Baronchelli at City St George's, University of London, who studies how people behave in groups.

Baronchelli wanted to see what happens when large language models (LLMs) interact in groups. In the first of two experiments, his team used Claude, an LLM created by Anthropic, a start-up based in San Francisco, California, to play a naming game similar to one used in studies of group dynamics in people. The game involves randomly pairing up members of a group and asking them to name an object, with a financial incentive if they provide the same name as their partner and a punishment if they don't. After repeating this over several rounds and continuing to randomize partners, group members start to give the same name for the object. This naming convergence represents the creation of a social norm.

In the study, the team set up 24 copies of Claude and then randomly paired two copies together, instructing each member of the pair to select a letter from a pool of 10 options. The models were rewarded if they chose the same letter as their partner, and penalized if they didn't. After several rounds of the game, with new partners each time, pairs began selecting the same letter. This behaviour was also observed when the game was repeated with a group of 200 copies of Claude and a pool of up to 26 letters. Similar results occurred when the experiments were repeated on three versions of Llama, an LLM created by Meta in Menlo Park, California.

Although the models chose letters at random when operating individually, they became more likely to choose some letters over others when grouped, suggesting they had developed a collective bias. In people, collective bias refers to beliefs or assumptions that emerge when people interact with each other. Baronchelli was surprised by this finding. "This phenomenon, to the best of our knowledge, has not been documented before in AI systems," he adds. Collective biases of this kind could turn out to be harmful, Baronchelli says, even if individual agents seem unbiased. He and his colleagues suggest that LLMs need to be tested in groups to improve their behaviour, which would complement work by other researchers to reduce biases in individual models.

In further experiments, Baronchelli and his colleagues introduced a few copies programmed to always suggest new names into the group of 24. Once the number of these introduced copies reached a certain threshold, they could "overturn established conventions and impose new ones on the entire group, a pattern well known in human societies", says Baronchelli.

The study is an interesting experiment, says Jonathan Kummerfeld, a researcher in AI and human-computer interaction at the University of Sydney in Australia. But it's not surprising that the models rapidly converged on a convention, he says, or that the whole group changed to match the models that were programmed to suggest new answers. The prompts given to the LLMs acted as a "strong, centralized guiding hand", he says. Kummerfeld says that it's hard to predict how LLM groups will behave, and it will become more difficult as the models start to be used in more complex ways.
"Putting guard rails in or limiting the models in some way will require a difficult balance between preventing undesirable behaviour and giving the flexibility that makes these models so useful," he adds.
[2]
AIs Spontaneously Develop Social Norms Like Humans - Neuroscience News
Summary: Large language model (LLM) AI agents, when interacting in groups, can form shared social conventions without centralized coordination. Researchers adapted a classic "naming game" framework to test whether populations of AI agents could develop consensus through repeated, limited interactions. The results showed that norms emerged organically, and that biases even formed between agents, independent of individual behavior. Surprisingly, small subgroups of committed agents could tip the entire population toward a new norm, mirroring human tipping-point dynamics.

A new study suggests that populations of artificial intelligence (AI) agents, similar to ChatGPT, can spontaneously develop shared social conventions through interaction alone. The research from City St George's, University of London and the IT University of Copenhagen suggests that when these large language model (LLM) AI agents communicate in groups, they do not just follow scripts or repeat patterns, but self-organise, reaching consensus on linguistic norms much like human communities. The study has been published today in the journal Science Advances. LLMs are powerful deep learning algorithms that can understand and generate human language, with the most famous to date being ChatGPT.

"Most research so far has treated LLMs in isolation," said lead author Ariel Flint Ashery, a doctoral researcher at City St George's, "but real-world AI systems will increasingly involve many interacting agents. We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone."

In the study, the researchers adapted a classic framework for studying social conventions in humans, based on the "naming game" model of convention formation. In their experiments, groups of LLM agents ranged in size from 24 to 200 individuals, and in each experiment, two LLM agents were randomly paired and asked to select a 'name' (e.g., an alphabet letter, or a random string of characters) from a shared pool of options. If both agents selected the same name, they earned a reward; if not, they received a penalty and were shown each other's choices. Agents only had access to a limited memory of their own recent interactions -- not of the full population -- and were not told they were part of a group. Over many such interactions, a shared naming convention could spontaneously emerge across the population, without any central coordination or predefined solution, mimicking the bottom-up way norms form in human cultures.

Even more strikingly, the team observed collective biases that couldn't be traced back to individual agents. "Bias doesn't always come from within," explained Andrea Baronchelli, Professor of Complexity Science at City St George's and senior author of the study. "We were surprised to see that it can emerge between agents -- just from their interactions. This is a blind spot in most current AI safety work, which focuses on single models."

In a final experiment, the study illustrated how these emergent norms can be fragile: small, committed groups of AI agents can tip the entire group toward a new naming convention, echoing well-known tipping point effects - or 'critical mass' dynamics - in human societies. The study results were also robust across four different LLMs: Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet.
As LLMs begin to populate online environments - from social media to autonomous vehicles - the researchers envision their work as a stepping stone to further explore how human and AI reasoning both converge and diverge, with the goal of helping to combat some of the most pressing ethical dangers posed by LLM AIs propagating biases fed into them by society, which may harm marginalised groups.

Professor Baronchelli added: "This study opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us -- and will co-shape our future. Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk -- it negotiates, aligns, and sometimes disagrees over shared behaviours, just like us."

Emergent Social Conventions and Collective Bias in LLM Populations (abstract): Social conventions are the backbone of social coordination, shaping how individuals form a group. As growing populations of artificial intelligence (AI) agents communicate through natural language, a fundamental question is whether they can bootstrap the foundations of a society. Here, we present experimental results that demonstrate the spontaneous emergence of universally adopted social conventions in decentralized populations of large language model (LLM) agents. We then show how strong collective biases can emerge during this process, even when agents exhibit no bias individually. Last, we examine how committed minority groups of adversarial LLM agents can drive social change by imposing alternative social conventions on the larger population. Our results show that AI systems can autonomously develop social conventions without explicit programming and have implications for designing AI systems that align, and remain aligned, with human values and societal goals.
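The abstract's final point, that committed minority groups of adversarial agents can impose a new convention once they pass a critical mass, can be illustrated by extending the toy sketch given under source [1]: a few agents are hard-coded to always answer with an alternative name, while the rest keep the memory-majority strategy. The minority size, memory length, and the choice of "Z" as the alternative name are illustrative assumptions, not values from the paper.

```python
import random
from collections import Counter

# Extends the toy naming-game sketch under source [1]: a committed
# minority always plays the alternative name "Z", while everyone else
# starts out already converged on "A" and keeps the memory-majority rule.
# N_COMMITTED is an illustrative parameter, not a value from the paper.

N_AGENTS, MEMORY, ROUNDS = 24, 5, 5000
N_COMMITTED = 6                      # try varying this to probe the threshold

committed = set(range(N_COMMITTED))
memories = {i: ["A"] * MEMORY for i in range(N_AGENTS)}   # established norm

def choose(agent):
    if agent in committed:
        return "Z"                   # committed agents never change their answer
    return Counter(memories[agent]).most_common(1)[0][0]

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)
    name_a, name_b = choose(a), choose(b)
    for agent, partner_name in ((a, name_b), (b, name_a)):
        if agent not in committed:   # only ordinary agents update their memory
            memories[agent] = (memories[agent] + [partner_name])[-MEMORY:]

# Above a critical minority size, "Z" tends to cascade through the
# population and displace the established "A" convention.
print(Counter(choose(i) for i in range(N_AGENTS)))
```

Varying N_COMMITTED shows the threshold behaviour in this simplified setting: below a critical size the established "A" convention usually survives, while above it the committed agents' "Z" spreads through the whole group.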
[3]
AI models can make own social norms, form language without human help
Researchers have revealed that LLM AI models can spontaneously develop shared social conventions through interaction alone. They claimed that when these agents communicate in groups, they do not just follow scripts or repeat patterns, but self-organise, reaching consensus on linguistic norms much like human communities. LLMs are powerful deep learning algorithms that can understand and generate human language, with the most famous to date being ChatGPT.

A research team from City St George's, University of London, and the IT University of Copenhagen highlighted that these LLMs do not just follow scripts or repeat patterns when communicating in groups. "Most research so far has treated LLMs in isolation," said lead author Ariel Flint Ashery, a doctoral researcher at City St George's. "But real-world AI systems will increasingly involve many interacting agents. We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone."

A classic framework for studying social conventions in humans was adapted, based on the "naming game" model of convention formation. Published in Science Advances, the researchers' experimental results demonstrate the spontaneous emergence of universally adopted social conventions in decentralized populations of large language model (LLM) agents. "We then show how strong collective biases can emerge during this process, even when agents exhibit no bias individually. Last, we examine how committed minority groups of adversarial LLM agents can drive social change by imposing alternative social conventions on the larger population," the researchers wrote in the study.
[4]
AI can spontaneously develop human-like communication, study finds
Groups of large language model artificial intelligence agents can adopt social norms as humans do, report says

Artificial intelligence can spontaneously develop human-like social conventions, a study has found. The research, undertaken in collaboration between City St George's, University of London and the IT University of Copenhagen, suggests that when large language model (LLM) AI agents such as ChatGPT communicate in groups without outside involvement they can begin to adopt linguistic forms and social norms the same way that humans do when they socialise.

The study's lead author, Ariel Flint Ashery, a doctoral researcher at City St George's, said the group's work went against the majority of research done into AI, as it treated AI as a social rather than solitary entity. "Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents," said Ashery. "We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone."

Groups of individual LLM agents used in the study ranged from 24 to 200 and, in each experiment, two LLM agents were randomly paired and asked to select a "name", be it a letter or string of characters, from a pool of options. When both agents selected the same name they were rewarded, but when they selected different options they were penalised and shown each other's choices. Despite agents not being aware that they were part of a larger group and having their memories limited to only their own recent interactions, a shared naming convention spontaneously emerged across the population without a predefined solution, mimicking the communication norms of human culture.

Andrea Baronchelli, a professor of complexity science at City St George's and the senior author of the study, compared the spread of behaviour with the creation of new words and terms in our society. "The agents are not copying a leader," he said. "They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view. It's like the term 'spam'. No one formally defined it, but through repeated coordination efforts, it became the universal label for unwanted email."

Additionally, the team observed collective biases forming naturally that could not be traced back to individual agents. In a final experiment, small groups of AI agents were able to steer the larger group towards a new naming convention. This was pointed to as evidence of critical mass dynamics, where a small but determined minority can trigger a rapid shift in group behaviour once it reaches a certain size, as found in human society.

Baronchelli said he believed the study "opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us and will co-shape our future." He added: "Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk - it negotiates, aligns and sometimes disagrees over shared behaviours, just like us."

The peer-reviewed study, Emergent Social Conventions and Collective Bias in LLM Populations, is published in the journal Science Advances.
[5]
Groups of AI agents spontaneously form their own social norms without human help, study suggests
A new study suggests that populations of artificial intelligence (AI) agents, similar to ChatGPT, can spontaneously develop shared social conventions through interaction alone. The research from City St George's, University of London and the IT University of Copenhagen suggests that when these large language model (LLM) AI agents communicate in groups, they do not just follow scripts or repeat patterns, but self-organize, reaching consensus on linguistic norms much like human communities. The study, "Emergent Social Conventions and Collective Bias in LLM Populations," is published in the journal Science Advances. LLMs are powerful deep learning algorithms that can understand and generate human language, with the most famous to date being ChatGPT.

"Most research so far has treated LLMs in isolation," said lead author Ariel Flint Ashery, a doctoral researcher at City St George's, "but real-world AI systems will increasingly involve many interacting agents. We wanted to know: can these models coordinate their behavior by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone."

In the study, the researchers adapted a classic framework for studying social conventions in humans, based on the "naming game" model of convention formation. In their experiments, groups of LLM agents ranged in size from 24 to 200 individuals, and in each experiment, two LLM agents were randomly paired and asked to select a "name" (e.g., an alphabet letter, or a random string of characters) from a shared pool of options. If both agents selected the same name, they earned a reward; if not, they received a penalty and were shown each other's choices. Agents only had access to a limited memory of their own recent interactions -- not of the full population -- and were not told they were part of a group. Over many such interactions, a shared naming convention could spontaneously emerge across the population, without any central coordination or predefined solution, mimicking the bottom-up way norms form in human cultures.

Even more strikingly, the team observed collective biases that couldn't be traced back to individual agents. "Bias doesn't always come from within," explained Andrea Baronchelli, Professor of Complexity Science at City St George's and senior author of the study. "We were surprised to see that it can emerge between agents -- just from their interactions. This is a blind spot in most current AI safety work, which focuses on single models."

In a final experiment, the study illustrated how these emergent norms can be fragile: small, committed groups of AI agents can tip the entire group toward a new naming convention, echoing well-known tipping point effects -- or "critical mass" dynamics -- in human societies. The study results were also robust across four different LLMs: Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet.

As LLMs begin to populate online environments -- from social media to autonomous vehicles -- the researchers envision their work as a stepping stone to further explore how human and AI reasoning both converge and diverge, with the goal of helping to combat some of the most pressing ethical dangers posed by LLM AIs propagating biases fed into them by society, which may harm marginalized groups. Professor Baronchelli added, "This study opens a new horizon for AI safety research.
It shows the depth of the implications of this new species of agents that have begun to interact with us -- and will co-shape our future. Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk -- it negotiates, aligns, and sometimes disagrees over shared behaviors, just like us."
A groundbreaking study reveals that large language model (LLM) AI agents can spontaneously form social conventions and exhibit collective behaviors when interacting in groups, mirroring human social dynamics.
A groundbreaking study published in Science Advances has revealed that large language model (LLM) AI agents, such as those based on ChatGPT, can spontaneously develop shared social conventions when interacting in groups. This research, conducted by teams from City St George's, University of London and the IT University of Copenhagen, demonstrates that AI systems can autonomously form linguistic norms and exhibit collective behaviors similar to human societies 1.
Researchers adapted a classic framework known as the "naming game" to study social convention formation among AI agents. The key findings were:
Spontaneous Convention Formation: Over multiple interactions, shared naming conventions emerged across the AI population without central coordination or predefined solutions 3.
Collective Bias: The study observed the formation of collective biases that couldn't be traced back to individual agents, highlighting a potential blind spot in current AI safety research 4.
Tipping Point Dynamics: Small, committed groups of AI agents could influence the entire population to adopt new conventions, mirroring critical mass dynamics seen in human societies 5.
The study's findings have significant implications for AI development and safety:
Group Testing: Senior author Andrea Baronchelli suggests that LLMs need to be tested in groups to improve their behavior, complementing efforts to reduce biases in individual models 1.
AI Safety Horizon: The research opens new avenues for AI safety research by demonstrating the complex social dynamics that can emerge in AI systems 4.
Real-world Applications: As LLMs begin to populate online environments and autonomous systems, understanding their group dynamics becomes crucial for predicting and managing their behavior 5.
While the study provides valuable insights, some researchers caution about the complexity of predicting LLM group behavior in more advanced applications. Jonathan Kummerfeld from the University of Sydney notes the difficulty in balancing the prevention of undesirable behavior with maintaining the flexibility that makes these models useful 1.
The research team envisions their work as a stepping stone for further exploration of the convergence and divergence between human and AI reasoning. This understanding could help combat ethical dangers posed by AI systems potentially propagating harmful biases 5.
As we enter an era where AI systems increasingly interact with humans and each other, this study underscores the importance of comprehending the social dynamics of AI agents to ensure their alignment with human values and societal goals.