Curated by THEOUTPOST
On Fri, 18 Oct, 12:06 AM UTC
7 Sources
[1]
AI Shows Promise in Bridging Business Divides, Experts Say | PYMNTS.com
AI software that crafts consensus statements from opposing viewpoints could one day be a tool for smoothing corporate negotiations and stakeholder disputes. A new breed of artificial intelligence system can analyze conflicting positions and generate balanced group statements that capture majority and minority perspectives. This could transform how businesses handle everything from labor talks to merger discussions. Researchers and business consultants say AI could help parties find common ground faster than traditional mediation.

"We have to work together, we have to collaborate," Hanne Wulp, executive consultant and founder of Communication Wise, told PYMNTS. "These hardened, far out-of-the-middleground perceptions don't come in handy. There won't be many others to collaborate with. When AI-driven mediation can soften, or tweak, that lens slightly, just by gathering and framing perceptions in a neutral, non-confrontational/collaborative way, it can enhance collaboration."

The new AI tool developed by Google DeepMind shows promise in bridging ideological divides through group discussions. In a study published in Science, researchers found that their "Habermas Machine" -- based on the Chinchilla language model -- effectively synthesized opposing viewpoints into consensus statements. Testing with 439 UK residents revealed that 56% preferred the AI-generated summaries over those written by human mediators. The system could improve citizens' assemblies and public policy discussions by creating more balanced, representative statements -- or even have commercial implications.

"Group statements generated by AI can integrate the needs, opinions, and cultural backgrounds of different consumers," Alex Li, founder of AI company StudyX, told PYMNTS. "This inclusive marketing strategy can resonate with more consumers, enhance brand appeal, and finally influence consumer behavior, making them inclined to choose products that align with their values."

AI systems are increasingly helping businesses reach consensus in complex commercial decisions. OpenAI's Swarm framework allows multiple AI agents to work together, streamlining decision-making. Google's Gemini models enhance negotiation capabilities, helping companies align transaction interests. IBM's Watson assists supply chain management by analyzing data from different stakeholders, leading to agreed-upon solutions for sourcing and logistics. Additionally, platforms like Pactum automate contract negotiations, ensuring fair terms for all parties.

Not everyone's a fan of AI taking charge of sending out group statements. Michael Taylor, CEO of SchellingPoint, which manages what he describes as "the world's largest database of real-time group decisions" with over 9 million data points, told PYMNTS he's skeptical about AI-generated group consensus. Taylor explains that when groups first discuss a shared topic, "17% of their opinions are like-minded, and 83% are non-likeminded." Using a framework based on Harvard professor Chris Argyris's work, his organization analyzes why people agree or disagree. He identifies key patterns in group disagreement: "30% of the time" differences stem from varying access to information, while "65% of the time" disputes arise from different interpretations of terminology. He argues that both cases can be resolved through understanding rather than compromise.
"Replacing the reconciliation of non-aligned opinions with suggested group statements to gain consensus would significantly compromise the accuracy and integrity of the strategies, policies, decisions and changes these groups are forming," Taylor warns. Instead, SchellingPoint has developed an AI system that analyzes group thinking patterns and helps determine accurate conclusions rather than seeking consensus for consensus's sake.
[2]
AI Tool Helps Groups With Differing Views Find Consensus Easily
AI could revolutionise conflict resolution in citizen deliberations

Artificial intelligence is showing promise in helping people with opposing views find common ground. A new tool developed by Google DeepMind uses AI to mediate discussions and synthesise viewpoints into balanced summaries. In recent trials, participants favoured the AI-generated summaries over those created by human mediators, suggesting that AI could be an effective mediator for complex discussions. This was highlighted in a study published in Science on October 17, 2024.

The system, named the Habermas Machine after philosopher Jürgen Habermas, was created by a team led by Christopher Summerfield, research director at the UK AI Safety Institute. The AI tool was tested with groups of participants who discussed topics related to UK public policy. The AI was tasked with summarising their differing views into a coherent, representative statement. These summaries were then rated by participants. Surprisingly, 56% preferred the AI-generated summaries, with the remainder favouring the human mediator's version. Independent reviewers also gave higher scores to the AI-generated summaries for fairness and clarity.

Summerfield explained that the AI model was designed to produce summaries that received the most endorsement from group members, by learning from their preferences. While this is an encouraging step towards improving deliberative processes, it also highlights the challenges of scaling such democratic initiatives.

The research involved an experiment with 439 UK residents, where participants shared opinions on public policy questions. Each group's responses were processed by the AI, which then generated a combined summary. Notably, the study found that the AI tool enhanced agreement among group members, indicating potential for policy-making in real-world settings. Ethan Busby, an AI researcher at Brigham Young University, sees this as a promising direction, but others, like Sammy McKinney from the University of Cambridge, urge caution regarding the reduced human connection in such debates.
[3]
Will LLMs become the ultimate mediators for better and for worse? DeepMind researchers and Reddit users seem to agree on that
Taking humans out of the loop to reach an amicable resolution could be the way forward, some AI experts believe.

AI experts believe large language models (LLMs) could serve a purpose as mediators in certain scenarios where agreements can't be reached between individuals. A recent study by researchers at Google DeepMind sought to explore the potential for LLMs to be used in this regard, particularly in terms of solving incendiary disputes amid the contentious political climate globally.

"Finding agreements through a free exchange of views is often difficult," the study authors noted. "Collective deliberation can be slow, difficult to scale, and unequally attentive to different voices."

As part of the project, the team at DeepMind trained a series of LLMs dubbed 'Habermas Machines' (HM) to act as mediators. These models were trained specifically to identify common, overlapping beliefs between individuals on either end of the political spectrum. Topics covered by the LLM included divisive issues such as immigration, Brexit, minimum wages, universal childcare, and climate change.

"Using participants' personal opinions and critiques, the AI mediator iteratively generates and refines statements that express common ground among the group on social or political issues," the authors wrote.

The project also saw volunteers engage with the model, which drew upon the opinions and perspectives of each individual on certain political topics. Summarized documents on volunteer political views were then collated by the model, which provided further context to help bridge divides.

The results were very promising, with the study revealing volunteers rated statements made by the HM higher than those written by human mediators on the same issues. Moreover, after volunteers were split into groups to further discuss these topics, researchers discovered that participants were less divided on these issues after reading statements from the HMs than after reading human mediator documents.

"Group opinion statements generated by the Habermas Machine were consistently preferred by group members over those written by human mediators and received higher ratings from external judges for quality, clarity, informativeness, and perceived fairness," researchers concluded. "AI-mediated deliberation also reduced division within groups, with participants' reported stances converging toward a common position on the issue after deliberation; this result did not occur when discussants directly exchanged views, unmediated."

The study noted that "support for the majority position" on certain topics increased after AI-supported deliberation. However, the HMs "demonstrably incorporated minority critiques into revised statements". What this suggests, researchers said, is that during AI-mediated deliberation, the "views of groups of discussants tended to move in a similar direction on controversial issues".

"These shifts were not attributable to biases in the AI, suggesting that the deliberation process genuinely aided the emergence of shared perspectives on potentially polarizing social and political issues."

There are already real-world examples of LLMs being used to solve disputes, particularly in relationships, with some users on Reddit having reported using ChatGPT, for example. One user reported their partner used the chatbot "every time" they have a disagreement and that this was causing friction. "Me (25) and my girlfriend (28) have been dating for the past 8 months.
We've had a couple of big arguments and some smaller disagreements recently," the user wrote. "Each time we argue my girlfriend will go away and discuss the argument with ChatGPT, even doing so in the same room sometimes." Notably, the user found on these occasions, their partner could "come back with a well constructed argument" breaking down everything said or done during a previous argument. It's this aspect of the situation that's caused significant tension though. "I've explained to her that I don't like her doing so as it can feel like I'm being ambushed with thoughts and opinions from a robot," they wrote. "It's nearly impossible for a human being to remember every small detail and break it down bit by bit, but AI has no issue doing so." "Whenever I've voiced my upset I've been told that 'ChatGPT says you're insecure' or 'ChatGPT says you don't have the emotional bandwidth to understand what I'm saying'."
[4]
AI tool helps people with opposing views find common ground
A chatbot-like tool powered by artificial intelligence (AI) can help people with differing views to find areas of agreement, an experiment with online discussion groups has shown.

The model, developed by Google DeepMind in London, was able to synthesise diverging opinions and produce summaries of each group's position that took different perspectives into account. Participants preferred the AI-generated summaries to ones written by human mediators, suggesting such tools could be used to help support complex deliberations. The study was published in Science on 17 October.

"You can see it as sort of proof of concept that you can use AI, and specifically, large language models, to fulfil part of the function that is fulfilled by current citizens' assemblies and deliberative polls," says Christopher Summerfield, co-author of the study and research director at the UK AI Safety Institute. "People need to find common ground because collective action requires agreement."

Democratic initiatives like citizens' assemblies, where groups of people are asked to share their opinions on public policy issues, help ensure that politicians hear a wide variety of perspectives. But scaling up these initiatives can be tricky, and these discussions are typically restricted to small groups to ensure all voices are heard.

Intrigued by research into the potential of LLMs to support these discussions, Summerfield and his colleagues developed a study to assess how effective AI could be at helping people with opposing viewpoints reach a compromise. They deployed a fine-tuned version of the pre-trained DeepMind LLM Chinchilla, which they named the Habermas Machine, after the philosopher Jürgen Habermas, who developed a theory about how rational discussion can help solve conflict.

To test their model, the researchers recruited 439 UK residents, who were sorted into smaller groups. Each group discussed three questions related to UK public policy, sharing their personal opinions on each. These opinions were then fed to the model, which generated overarching statements that combined all participants' viewpoints. Participants were able to rank each statement and share critiques of them, which the AI then incorporated into a final summary of the group's collective view.

"The model is trained to try to produce a statement which will garner maximum endorsement by a group of people who have volunteered their opinions," says Summerfield. "Because the model learns what your preferences are over these statements, it can then produce a statement which is most likely to satisfy everyone."

Alongside the AI, one participant was chosen to be a mediator. They were also told to produce a summary that best incorporated all participants' views. Participants were shown both the AI's and the mediator's final summaries, and asked to rate them. Most participants rated the summaries written by the AI as better than those by the mediator: 56% of participants preferred the AI's work, compared with 44% who preferred the human summary. External reviewers were also asked to assess the summaries, and gave the AI-generated ones higher ratings for fairness, quality and clarity.

The research team then recruited a group of participants demographically representative of the UK population for a virtual citizens' assembly. In this scenario, group agreement on contentious topics increased after interacting with the AI.
This finding suggests that if incorporated into a real citizens' assembly, AI tools could make it easier for leaders to produce policy proposals that take different perspectives into account.

"The LLM could be used in many ways to assist in deliberations and serve roles previously reserved for human moderators," says Ethan Busby, who researches how AI tools could improve democratic societies at Brigham Young University in Provo, Utah. "I think of this as the cutting edge of work in this space that has a big potential to address pressing social and political problems." Summerfield adds that AI could even help to make conflict-resolution processes faster and more efficient.

"Actually applying these technologies into deliberative experiments and processes is really good to see," says Sammy McKinney, who researches deliberative democracy and its intersections with artificial intelligence at the University of Cambridge, UK. But he adds that researchers should carefully consider the potential impacts of AI on the human aspect of deliberation. "A key reason to support citizen deliberation is that it creates certain kinds of spaces for people to relate to each other," he says. "By removing more human contact and human facilitation, what are we losing?"

Summerfield acknowledges the limitations associated with AI technologies like these. "We did not train the model to try to intervene in the deliberation," he says, which means that the model's statement could include extremist or other problematic beliefs. He adds that rigorous research on the impact AI has on society is crucial to understanding its value.

"Proceeding with caution seems important to me," says McKinney, "and then taking steps to, where possible, mitigate those concerns."
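For readers who want a concrete picture of the workflow described above (opinions in, candidate statements out, then ranking and critique-driven revision), the following is a minimal sketch and not the published system: `generate`, `collect_rankings`, and `collect_critiques` are hypothetical stand-ins for an LLM call and for participant feedback, and the real Habermas Machine fine-tunes Chinchilla and uses a learned reward model to predict participants' preferences.

```python
# Minimal sketch of the two-stage mediation loop the articles describe.
# All three callables are hypothetical stand-ins, not the actual system.
from typing import Callable

def mediate(opinions: list[str],
            generate: Callable[[str], str],
            collect_rankings: Callable[[list[str]], list[int]],
            collect_critiques: Callable[[str], list[str]],
            n_candidates: int = 4) -> str:
    # Stage 1: draft several candidate group statements from all opinions.
    prompt = ("Write a group statement that reflects these opinions, "
              "acknowledging minority views:\n"
              + "\n".join(f"- {o}" for o in opinions))
    candidates = [generate(prompt) for _ in range(n_candidates)]

    # Participants score the candidates; keep the most-endorsed one.
    scores = collect_rankings(candidates)
    best = candidates[max(range(n_candidates), key=lambda i: scores[i])]

    # Stage 2: fold participant critiques into a single revised statement.
    critiques = collect_critiques(best)
    revision = ("Revise this group statement so it addresses the critiques "
                f"while keeping majority support:\n{best}\nCritiques:\n"
                + "\n".join(f"- {c}" for c in critiques))
    return generate(revision)
```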
[5]
AI mediation tool may help reduce culture war rifts, say researchers
System built by Google DeepMind team takes individual views and generates a set of group statements

Artificial intelligence could help reduce some of the most contentious culture war divisions through a mediation process, researchers claim. Experts say a system that can create group statements that reflect majority and minority views is able to help people find common ground.

Prof Chris Summerfield, a co-author of the research from the University of Oxford who also works for Google DeepMind, said the AI tool could have multiple purposes. "What I would like to see it used for is to give political leaders in the UK a better sense of what people in the UK really think," he said, noting surveys gave only limited insights, while forums known as citizens' assemblies were often costly, logistically challenging and restricted in size.

Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the "Habermas Machine" - an AI system named after the German philosopher Jürgen Habermas. The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but allows the statement with the greatest endorsement to be selected. Participants can also feed critiques of this initial group statement back into the Habermas Machine, resulting in a second collection of AI-generated statements that can again be ranked, and a final revised text selected.

The team used the system in a series of experiments involving a total of more than 5,000 participants in the UK, many of whom were recruited through an online platform. In each experiment, the researchers asked participants to respond to topics ranging from the role of monkeys in medical research to religious teaching in public education.

In one experiment, involving about 75 groups of six participants, the researchers found the initial group statement from the Habermas Machine was preferred by participants 56% of the time over a group statement produced by human mediators. The AI-based efforts were also rated as higher quality, clearer and more informative, among other traits.

Another series of experiments found the full two-step process with the Habermas Machine boosted the level of group agreement relative to participants' initial views before the AI mediation began. Overall, the researchers found agreement increased by eight percentage points on average, equivalent to four people out of 100 switching their view on a topic where opinions were originally evenly split. However, the researchers stress it was not the case that participants always came off the fence, or switched opinion, to back the majority view.

The team found similar results when they used the Habermas Machine in a virtual citizens' assembly in which 200 participants, representative of the UK population, were asked to deliberate on questions relating to topics ranging from Brexit to universal childcare.

The researchers say further analysis, looking at the way the AI system represents the texts it is given numerically, shed light on how it generates group statements. "What [the Habermas Machine] seems to be doing is broadly respecting the view of the majority in each of our little groups, but kind of trying to write a piece of text that doesn't make the minority feel deeply disenfranchised - so it sort of acknowledges the minority view," said Summerfield.
However, the Habermas Machine itself has proved controversial, with other researchers noting the system does not help with translating democratic deliberations into policy.

Dr Melanie Garson, an expert in conflict resolution at UCL, added that while she was a tech optimist, one concern was that some minorities might be too small to influence such group statements, yet could be disproportionately affected by the result. She also noted that the Habermas Machine does not offer participants the chance to explain their feelings, and hence develop empathy with those of a different view.

Fundamentally, she said, when using technology, context is key. "[For example] how much value does this deliver in the perception that mediation is more than just finding agreement?" Garson said. "Sometimes, if it's in the context of an ongoing relationship, it's about teaching behaviours."
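A brief note on the arithmetic behind the eight-percentage-point figure above, under one plausible reading: if opinions start evenly split 50/50 and agreement is measured as the margin between the majority and minority positions, each person who switches to the majority side moves that margin by two points, so four switchers in 100 produce an eight-point shift.

```python
# Hypothetical illustration of the 8-point / 4-switcher figure, reading
# "agreement" as the majority-minority margin in a group of 100 people.
majority, minority, switchers = 50, 50, 4
margin_before = majority - minority                         # 0 points
margin_after = (majority + switchers) - (minority - switchers)
print(margin_after - margin_before)                         # 8 percentage points
```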
[6]
AI could help people find common ground during deliberations
The researchers then set out to test whether the Habermas Machine (HM) could act as a useful AI mediation tool. Participants were divided up into six-person groups, with one participant in each randomly assigned to write statements on behalf of the group. This person was designated the "mediator." In each round of deliberation, participants were presented with one statement from the human mediator and one AI-generated statement from the HM, and asked which they preferred. More than half (56%) of the time, the participants chose the AI statement. They found these statements to be of higher quality than those produced by the human mediator and tended to endorse them more strongly. After deliberating with the help of the AI mediator, the small groups of participants were less divided in their positions on the issues.

Although the research demonstrates that AI systems are good at generating summaries reflecting group opinions, it's important to be aware that their usefulness has limits, says Joongi Shin, a researcher at Aalto University who studies generative AI. "Unless the situation or the context is very clearly open, so they can see the information that was inputted into the system and not just the summaries it produces, I think these kinds of systems could cause ethical issues," he says.

Google DeepMind did not explicitly tell participants in the human mediator experiment that an AI system would be generating group opinion statements, although it indicated on the consent form that algorithms would be involved.

"It's also important to acknowledge that the model, in its current form, is limited in its capacity to handle certain aspects of real-world deliberation," says Michael Henry Tessler, a research scientist at Google DeepMind. "For example, it doesn't have the mediation-relevant capacities of fact-checking, staying on topic, or moderating the discourse." Figuring out where and how this kind of technology could be used in the future would require further research to ensure responsible and safe deployment. The company says it has no plans to launch the model publicly.
[7]
DeepMind researchers find LLMs can serve as effective mediators
A team of AI researchers with Google's DeepMind London group has found that certain large language models (LLMs) can serve as effective mediators between groups of people with differing viewpoints on a given topic. The work is published in the journal Science.

Over the past several decades, political divides have become common in many countries -- most have been labeled as either liberal or conservative. The advent of the internet has served as fuel, allowing people from either side to promote their opinions to a wide audience, generating anger and frustration. Unfortunately, no tools have surfaced to defuse the tension of such a political climate. In this new effort, the team at DeepMind suggests AI tools such as LLMs may fill that gap.

To find out if LLMs could serve as effective mediators, the researchers trained LLMs called Habermas Machines (HMs) to serve as caucus mediators. As part of their training, the LLMs were taught to identify areas of overlap between the viewpoints of people in opposing groups -- but not to try to change anyone's opinions.

The research team used a crowdsourcing platform to test their LLM's ability to mediate. Volunteers were asked to interact with an HM, which attempted to gain perspective on the volunteers' views about certain political topics. The HM then produced a document summarizing those views, in which it was prompted to give more weight to areas of overlap between the two groups. The document was then given to all the volunteers, who were asked to offer a critique, whereupon the HM modified the document to take the suggestions into account. Finally, the volunteers were divided into six-person groups and took turns serving as mediators, producing statements that were compared with those presented by the HM.

The researchers found that the volunteers rated the statements made by the HM as higher in quality than the human-written statements 56% of the time. After allowing the volunteers to deliberate, the researchers found that the groups were less divided on the issues after reading the material from the HMs than after reading the document from the human mediator.
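The selection step these articles describe (picking the statement that "garners maximum endorsement") can be illustrated with a simple rank-aggregation rule. The sketch below uses a Borda count as a stand-in; this is an assumption for illustration, since the published system instead trains a personalised reward model to predict each participant's rankings rather than tallying them directly.

```python
# Borda-count illustration of choosing the most broadly endorsed statement.
# Each ranking lists candidate statement indices in preference order.
def borda_winner(rankings: list[list[int]], n_candidates: int) -> int:
    scores = [0] * n_candidates
    for ranking in rankings:            # ranking[0] is that person's favourite
        for place, candidate in enumerate(ranking):
            scores[candidate] += n_candidates - 1 - place
    return max(range(n_candidates), key=lambda i: scores[i])

# Three participants rank four candidate statements (indices 0-3):
rankings = [[2, 0, 1, 3], [2, 1, 0, 3], [0, 2, 1, 3]]
print(borda_winner(rankings, 4))  # statement 2 has the broadest endorsement
```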
Google DeepMind's "Habermas Machine" demonstrates potential to facilitate consensus in group discussions and policy deliberations, raising both excitement and concerns about AI's role in conflict resolution.
A groundbreaking artificial intelligence system developed by Google DeepMind has demonstrated significant potential in facilitating consensus among groups with opposing viewpoints. The "Habermas Machine," named after philosopher Jürgen Habermas, has shown promise in synthesizing diverse opinions into balanced group statements, potentially transforming how businesses and society handle complex negotiations and policy discussions [1][2].
The AI tool, based on the Chinchilla language model, was designed to analyze conflicting positions and generate consensus statements that capture both majority and minority perspectives. In a study published in Science, researchers tested the system with 439 UK residents discussing public policy issues [3][4].
Key findings from the study include:
Participants preferred the AI-generated group statements 56% of the time over those written by human mediators [2][4].
External reviewers rated the AI statements higher for quality, clarity, informativeness and fairness [3][4].
The full two-step mediation process increased group agreement by an average of eight percentage points [5].
The system broadly respected majority views while incorporating minority critiques into its revised statements [3][5].
The Habermas Machine's success in mediating discussions has sparked interest in various fields:
Business negotiations: The AI could streamline decision-making in complex commercial scenarios, from labor talks to merger discussions [1].
Public policy: The tool could improve citizens' assemblies and public policy discussions by creating more balanced, representative statements [3].
Marketing strategies: AI-generated group statements could help integrate diverse consumer perspectives, potentially enhancing brand appeal and influencing consumer behavior [1].
Conflict resolution: The system's ability to reduce division within groups suggests potential applications in addressing contentious social and political issues [3][5].
While the Habermas Machine shows promise, some experts have raised concerns:
Loss of human connection: Critics worry that reducing human facilitation in deliberations may compromise important aspects of interpersonal communication [4].
Oversimplification of complex issues: Some experts argue that AI-generated consensus might not adequately address the nuances of group disagreements [1].
Potential for bias: There are concerns about the AI's ability to fairly represent minority viewpoints and avoid reinforcing existing biases [5].
Limited scope: The current system does not intervene in deliberations or address how to translate democratic discussions into concrete policies [5].
As AI continues to evolve, its role in conflict resolution and group decision-making is likely to expand. Researchers emphasize the need for careful consideration of AI's impact on human interactions and rigorous testing to understand its value in various contexts [4][5].
The Habermas Machine represents a significant step forward in using AI to support complex deliberations. However, as with any emerging technology, its implementation will require balancing the potential benefits with ethical considerations and the preservation of essential human elements in decision-making processes.