2 Sources
[1]
AI can 'same-ify' human expression -- can some brains resist its pull?
At the heart of today's artificial-intelligence models are vast bodies of training data -- text, videos and images created by real people and used to teach models how to recognize patterns and generate content. People are certainly training AI systems -- but are the systems training us, as well? A growing number of papers report that people tend to pick up writing patterns, reasoning methods and even opinions from the large language models (LLMs) they use. Some researchers say that this influence threatens to create a sameness in human writing, and warn that the effect could extend even to text written by people who aren't first-hand AI users.

"If people around you are interacting with these LLMs and adopting their writing styles, perspectives and reasoning, at some point it would surround you so much that it would seem like the more socially correct way to frame information," says Zhivar Sourati, a computer scientist at the University of Southern California in Los Angeles and co-author of an opinion piece, published today in Trends in Cognitive Sciences, arguing that LLMs are homogenizing human discourse.

But others assert that the human mind might still resist AI's flattening effect. In one study, posted as a preprint to the arXiv server in November and not yet peer reviewed, the authors identified groups of writers who preserve "distinctively human stylistic signatures, possibly valuing authenticity over efficiency gains offered by AI assistance".

In a preprint posted on arXiv last year, which has not undergone peer review, Sourati and his co-authors analysed Reddit posts, news content and preprint studies from both before and after ChatGPT was launched in 2022. The team found that text published after the platform's release tends to be less stylistically diverse than text from before. In today's opinion piece, the authors argue that this homogenization extends to people's perspectives and reasoning as well.
They point to an unreviewed 2023 preprint, posted on arXiv, in which participants engaged with LLMs that expressed either positive or negative feelings about social media. After this exposure, participants' own opinions shifted towards those produced by the LLMs. The authors "make a fair point" in saying that individuals can benefit from AI, which "helps you write better and makes you more understandable to other people", says Oliver Hauser, who researches economics and AI at the University of Exeter, UK. "But as soon as this adoption is widespread, the collective suffers the most."

In a study published today in Science Advances, researchers found that people's opinions on social issues start to echo those they've picked up from an AI tool. Participants used AI helpers to write about sociopolitical topics, such as the death penalty. Afterwards, they tended to express attitudes on those topics more similar to what the LLMs wrote than did a control group that didn't use AI. This effect could eventually reduce the diversity of political views, says co-author Sterling Williams-Ceci, an information scientist at Cornell University in Ithaca, New York, although the precise effect would depend on the leanings that the various LLMs express.

Importantly, study participants didn't notice that they had been influenced by the chatbots. Even when participants were told that the AI could bias their viewpoints, the results remained unchanged. At the moment, "we don't know how to prevent this", Williams-Ceci says. But the answer probably isn't as simple as giving users a plain warning.

The sameness imposed by AI tools can also narrow scientific thought, Hauser says. "We might lose out on a crazy idea that seems outrageous, but turns out to be the exact thing we need for a breakthrough." A paper published in Nature in January found that scientists who used AI tools for their research tended to focus on a more restricted set of fields than those who did not.
But not all studies show that AI use leads to homogeneity. The November preprint reported that human writers evolve in different ways while engaging with AI: the writing of some people does indeed become more similar to the AI model's; others maintain their own personal writing style, or develop one that becomes even more markedly distinct from the AI's.

Studies raising concerns that AI is driving out the diversity in human writing prompted doubts from Yun Wan, who researches human-AI collaboration at the University of Houston-Downtown in Texas. "We thought that might not be the case," he says. Wan and his co-authors created ten unique AI personas -- with a variety of cultural backgrounds and thinking styles -- that generated plots for human participants to use in writing creative stories. The authors' analysis, posted last year on arXiv without peer review, showed that the resulting stories had a similar level of diversity to stories written by humans with no AI assistance. Because of their diverse responses, Wan says, these custom personas are one way to reduce homogeneity.
[2]
AI Chatbots Are Making People All Think the Same, Study Says
Part of what makes us human is the unique ways we think and solve problems. But using large language models like ChatGPT might be eroding this uniqueness and leading humans to think and communicate in the same way, according to a group of scientists and psychologists who have co-authored a new opinion paper.

"Individuals differ in how they write, reason, and view the world," Zhivar Sourati, a computer scientist at the University of Southern California and first author of the paper, said in a statement. "When these differences are mediated by the same LLMs, their distinct linguistic style, perspective and reasoning strategies become homogenized, producing standardized expressions and thoughts across users."

The paper, published Wednesday in the journal Trends in Cognitive Sciences, examines how hundreds of millions of people worldwide use the same handful of chatbots and what that means for our individuality. Pew Research found that one-third of all Americans used ChatGPT last year, double the 2023 figure. Chatbot use is even more common among teens: two-thirds say they use chatbots, and almost a third use them daily. Businesses are also going all in on artificial intelligence: Stanford found that 78% of organizations reported using AI in 2024, up from 55% in 2023.

So we're using AI a lot. The danger is that we could lose the diversity in the ways we think. The team points out that LLMs generate writing that varies less than what people come up with on their own.
Part of the reason LLMs may be pushing homogenized thought, according to the paper's authors, is the data used to train them. "Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," Sourati says.

There's a good reason why the authors warn against this trend. Homogenized thought reduces pluralism, the idea that multiple perspectives are good for society as a whole. "This value of pluralism is rooted in the long-held principle that sound judgment requires exposure to varied thought," the authors write in the paper. "Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability."

In other words, we use different ways of thinking to find more solutions to a problem; if we lose the ability to think and communicate differently, it could affect how we adapt to new situations. "The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," Sourati says.

The authors also say that this trend affects even people who don't use chatbots. "If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas," Sourati says.
Scientists warn that widespread use of AI chatbots like ChatGPT is creating sameness in how people write, reason, and think. New research published in Trends in Cognitive Sciences reveals that large language models are reducing diversity in discourse, with effects extending even to non-users. One-third of Americans used ChatGPT last year, raising concerns about the erosion of unique individual thinking.
People are adopting writing patterns, reasoning methods, and even opinions from the large language models they interact with, according to a growing body of research examining AI-mediated communication. Zhivar Sourati, a computer scientist at the University of Southern California, co-authored an opinion piece published in Trends in Cognitive Sciences arguing that these tools are homogenizing human expression in ways that threaten collective intelligence [1]. The concern extends beyond individual users: "If people around you are interacting with these LLMs and adopting their writing styles, perspectives and reasoning, at some point it would surround you so much that it would seem like the more socially correct way to frame information," Sourati explains [1].

Analysis of Reddit posts, news content, and preprint studies from before and after ChatGPT launched in 2022 revealed that text published after the platform's release tends to be less stylistically diverse than earlier content [1]. This AI homogenization effect appears widespread: Pew Research found that one-third of all Americans used ChatGPT last year, double the 2023 figure, while two-thirds of teens report using chatbots [2]. Stanford data shows 78% of organizations reported using AI in 2024, up from 55% in 2023 [2].
Source: Nature
The convergence in how people think stems partly from how large language models are built. "Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," Sourati notes [2].

Research published in Science Advances found that people's opinions on social issues start to echo those picked up from AI tools. Participants who used AI helpers to write about sociopolitical topics like the death penalty expressed attitudes more similar to what the LLMs wrote than a control group did [1]. Crucially, study participants didn't notice they had been influenced by the chatbots. Even when told that AI could bias their viewpoints, the results remained unchanged. "At the moment, we don't know how to prevent this," says Sterling Williams-Ceci, an information scientist at Cornell University [1]. This unconscious adoption of AI-generated perspectives could eventually reduce the diversity of political views and erode unique individual thinking patterns.
Source: CNET
The societal implications of AI reach beyond everyday communication into scientific research itself. "We might lose out on a crazy idea that seems outrageous, but turns out to be the exact thing we need for a breakthrough," warns Oliver Hauser, who researches economics and AI at the University of Exeter [1]. A paper published in Nature in January found that scientists who used AI tools for their research tended to focus on a more restricted set of fields than those who did not [1].

This trend threatens societal pluralism and collective intelligence. "Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability," the authors write [2]. The value of diverse perspectives in reaching sound judgment could diminish as reduced diversity in discourse becomes normalized. "The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," Sourati explains [2].
Not all research points toward uniformity. A November preprint posted to arXiv identified groups of writers who preserve "distinctively human stylistic signatures, possibly valuing authenticity over efficiency gains offered by AI assistance," though the study has not yet undergone peer review [1]. The writing of some people becomes more similar to the AI models', while others maintain their personal writing styles or develop ones that become even more markedly distinct [1].

Research by Yun Wan at the University of Houston-Downtown tested whether diverse AI personas could maintain variety in human-AI collaboration. His team created ten unique AI personas with different cultural backgrounds and thinking styles that generated plots for creative stories. Analysis showed the resulting stories had a similar level of diversity to stories written by humans with no AI assistance, though this work also awaits peer review [1]. These findings suggest that how AI tools are designed and deployed may determine whether they flatten or preserve the diversity of ideas that characterizes human expression.

Summarized by Navi