3 Sources
[1]
AI can 'same-ify' human expression -- can some brains resist its pull?
At the heart of today's artificial-intelligence models are vast bodies of training data -- text, videos and images created by real people and used to teach models how to recognize patterns and generate content. People are certainly training AI systems -- but are the systems training us, as well? A growing number of papers report that people tend to pick up writing patterns, reasoning methods and even opinions from the large language models (LLMs) they use. Some researchers say that this influence threatens to create a sameness among human writing, and warn that the effect could even extend to text written by humans who aren't first-hand AI users.

"If people around you are interacting with these LLMs and adopting their writing styles, perspectives and reasoning, at some point it would surround you so much that it would seem like the more socially correct way to frame information," says Zhivar Sourati, a computer scientist at the University of Southern California in Los Angeles and co-author of an opinion piece, published today in Trends in Cognitive Sciences, arguing that LLMs are homogenizing human discourse.

But others assert that the human mind might still resist AI's flattening effect. In one study, posted as a preprint to the arXiv server in November and not yet peer reviewed, the authors identified groups of writers who preserve "distinctively human stylistic signatures, possibly valuing authenticity over efficiency gains offered by AI assistance".

In a preprint posted on arXiv last year, which has not undergone peer review, Sourati and his co-authors analysed Reddit posts, news content and preprint studies from both before and after ChatGPT was launched in 2022. The team found that text published after the platform's release tends to be less stylistically diverse than text from before. In today's opinion piece, the authors argue that this phenomenon extends to people's perspectives and reasoning as well. They point to an unreviewed 2023 preprint, posted on arXiv, in which participants engaged with LLMs that expressed either positive or negative feelings about social media. After this exposure, participants' own opinions shifted towards those produced by the LLMs.

The authors "make a fair point" in saying that individuals can benefit from AI, which "helps you write better and makes you more understandable to other people", says Oliver Hauser, who researches economics and AI at the University of Exeter, UK. "But as soon as this adoption is widespread, the collective suffers the most."

In a study published today in Science Advances, researchers found that people's opinions on social issues start to echo those they've picked up from an AI tool. Participants used AI helpers to write about sociopolitical topics, such as the death penalty. Afterwards, they tended to express attitudes on those topics that were more similar to what the LLMs wrote than were those of a control group that didn't use AI. This effect could also eventually reduce diversity of political views, says co-author Sterling Williams-Ceci, an information scientist at Cornell University in Ithaca, New York, although the precise effect would depend on the leanings that the various LLMs express. Importantly, study participants didn't notice that they had been influenced by the chatbots. And even when participants were told that the AI could bias their viewpoints, the results remained unchanged. At the moment, "we don't know how to prevent this", Williams-Ceci says. But the answer probably isn't as simple as giving users a plain warning.
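The preprint's central measurement, that post-2022 text is less stylistically diverse than earlier text, can be made concrete with a toy metric. The sketch below is an illustration only, not the method Sourati's team used: it assumes stylistic diversity can be proxied by the mean pairwise distance between simple per-document style features (sentence length, word length, type-token ratio), all of which are invented for this example.

```python
# Toy illustration (not the preprint's method): proxy "stylistic
# diversity" as the mean pairwise distance between per-document style
# feature vectors. Lower scores mean more homogeneous writing.
import math

def style_features(text: str) -> list[float]:
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_sent_len = len(words) / max(len(sentences), 1)              # words per sentence
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)  # characters per word
    type_token = len({w.lower() for w in words}) / max(len(words), 1)  # vocabulary variety
    return [avg_sent_len, avg_word_len, type_token]

def diversity(corpus: list[str]) -> float:
    vecs = [style_features(t) for t in corpus]
    dists = [math.dist(a, b) for i, a in enumerate(vecs) for b in vecs[i + 1:]]
    return sum(dists) / max(len(dists), 1)

pre_2022 = ["Short, punchy prose.", "Long, meandering sentences that wander on and on."]
post_2022 = ["Here is a balanced overview.", "Here is a clear, balanced summary."]
print(f"pre-ChatGPT corpus:  {diversity(pre_2022):.3f}")   # higher = more varied styles
print(f"post-ChatGPT corpus: {diversity(post_2022):.3f}")  # lower = more same-y
```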
The sameness imposed by AI tools can also narrow scientific thought, Hauser says. "We might lose out on a crazy idea that seems outrageous, but turns out to be the exact thing we need for a breakthrough." A paper published in Nature in January found that scientists who used AI tools for their research tended to focus on a more restricted set of fields than those who did not.

But not all studies show that AI use leads to homogeneity. The November preprint reported that human writers evolve in different ways while engaging with AI. The writing of some people does indeed become more similar to the AI model's; others maintain their own personal writing style, or develop one that becomes even more markedly distinct from the AI's.

Studies raising concerns that AI is driving out the diversity in human writing prompted doubts from Yun Wan, who researches human-AI collaboration at the University of Houston-Downtown in Texas. "We thought that might not be the case," he says. Wan and his co-authors created ten unique AI personas -- with a variety of cultural backgrounds and thinking styles -- that generated plots that were used by human participants to write creative stories. The authors' analysis, posted last year on arXiv without peer review, showed that the resulting stories had a similar level of diversity to stories written by humans with no AI assistance. Wan says that, because of their diverse responses, these custom personas are one way to reduce homogeneity.
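Wan's persona approach lends itself to a small sketch. The code below is hypothetical, not the study's implementation: the persona attributes, the prompt wording and the placeholder `generate` function are all invented for illustration; only the overall pattern, conditioning each generation on a different persona so the seed plots vary, follows the description above.

```python
# Hypothetical sketch of persona-conditioned plot generation, loosely
# following Wan's described setup. The personas and prompt wording are
# invented; `generate` is a placeholder for any real LLM call.
from dataclasses import dataclass

@dataclass
class Persona:
    background: str
    thinking_style: str

PERSONAS = [
    Persona("rural storyteller steeped in oral folklore", "associative, image-driven"),
    Persona("urban investigative journalist", "sceptical, evidence-first"),
    Persona("retired marine biologist", "systems-minded, attentive to slow causes"),
    # ...in the study, ten personas with distinct cultural backgrounds
]

def persona_prompt(p: Persona, topic: str) -> str:
    return (
        f"You are a {p.background} whose thinking is {p.thinking_style}. "
        f"Propose a one-paragraph story plot about: {topic}"
    )

def generate(prompt: str) -> str:
    """Placeholder: swap in a real model call here."""
    return f"[plot generated from: {prompt!r}]"

topic = "a town where everyone wakes with a stranger's memories"
# Each human writer gets a plot seeded by a *different* persona, so the
# raw material itself varies instead of converging on one default voice.
for p in PERSONAS:
    print(generate(persona_prompt(p, topic)))
```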
[2]
AI Chatbots Are Making People All Think the Same, Study Says
Part of what makes us human is the unique ways we think and solve problems. But using large language models like ChatGPT might be eroding this uniqueness and leading humans to think and communicate the same way, according to a group of scientists and psychologists who have co-authored a new opinion paper.

"Individuals differ in how they write, reason, and view the world," Zhivar Sourati, a computer scientist at the University of Southern California and first author of the paper, said in a statement. "When these differences are mediated by the same LLMs, their distinct linguistic style, perspective and reasoning strategies become homogenized, producing standardized expressions and thoughts across users," Sourati continued.

The paper, published Wednesday in the journal Trends in Cognitive Sciences, examines how hundreds of millions of people worldwide use the same handful of chatbots and what that means for our individuality. Pew Research found that one-third of all Americans used ChatGPT last year, double the 2023 figure. And chatbot use is much more common among teens: two-thirds say they use chatbots, and almost a third use them daily. Businesses are also going all in on artificial intelligence: Stanford found that 78% of organizations reported using AI in 2024, up from 55% in 2023.

So we're using AI a lot. But the danger is that we could lose the diversity in the ways we think. The team points out that LLMs generate writing that varies less than what people come up with on their own. Part of the reason LLMs may be pushing homogenized thought, according to the paper's authors, is the data used to train them. "Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," Sourati says.

There's a good reason why the authors warn against this trend. Homogenized thought reduces pluralism, which is essentially the idea that multiple perspectives are good for society as a whole. "This value of pluralism is rooted in the long-held principle that sound judgment requires exposure to varied thought," the authors write in the paper. "Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability."

So we use different ways of thinking to figure out more solutions to a problem. If we lose the ability to think and communicate differently, it could affect how we adapt to new situations. "The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," Sourati says.

The authors also say that this trend even impacts people who don't use chatbots. "If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas," Sourati says.
[3]
Researchers Say AI Is Homogenizing Human Expression and Thought
There has been a creeping sense that, since large language models like ChatGPT have become publicly accessible and more widely used, people have started writing and sounding more like AI-generated content. (Of course, AI is just doing an impression of an amalgam of human-created material, so there's a clear chicken and egg here.) Now there is evidence to support that sneaking suspicion.

In a paper published Wednesday in the journal Trends in Cognitive Sciences, researchers at the University of Southern California warned that the use of LLMs risks flattening human thought and creativity. The team of researchers analyzed more than 130 studies to better understand how large language models affect cognitive diversity, examining research across a variety of fields from linguistics to computer science.

The team found that, despite the fact that AI models pull from a huge database of information, they consistently produce outputs that are less varied than human thought. That is in part because, while these models may be trained on a seemingly endless supply of human-produced thoughts and ideas, they're not capable of actually processing all that material in a way that considers the diversity of opinion available. Instead, LLMs tend to favor consistent patterns that they can identify in the training data, which is part of the reason some critiques of the models refer to them as a kind of glorified autocomplete. "Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," author and computer scientist Zhivar Sourati of the University of Southern California said in a statement.

Some LLMs even advertise this fact. OpenAI explicitly states that ChatGPT is "skewed towards Western views," for instance, and xAI has, pretty obviously, tweaked its chatbot Grok to reflect the views of CEO Elon Musk on more than one occasion.

The result of interacting with models that significantly favor certain perspectives, though, is that humans then start to internalize and reflect those perspectives. This can be as simple as a person using a chatbot to polish their writing and remove some of their stylistic choices, but previous research has shown that interacting with LLMs can actually shift the way people think to be more in line with the information being provided to them by a chatbot. LLMs also use chain-of-thought reasoning, which reflects a linear form of thinking. They are incapable of more abstract styles of reasoning that may require leaps in logic that are not obvious but can be very effective.

Perhaps one of the most interesting observations the researchers made was that, while individuals using LLMs to generate ideas often produce more volume (albeit with less creativity), groups of people actually produce fewer ideas when using LLMs compared to when they are simply tasked with collaborating and bouncing ideas off of each other. Basically, using the model locks people into a particular way of thinking and reduces the diversity of perspective that might otherwise come out of discussion and sharing experiences. It's been well understood for a while now that diversity of thought and experiences produces better outcomes for groups and organizations. That holds true as it relates to LLMs, which are essentially encouraged to seek consensus thought rather than diversity.
Don't expect that problem to get corrected any time soon, either, considering the Trump administration issued an executive order effectively punishing any company that creates an AI model that promotes diversity.
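The "glorified autocomplete" characterization above has a concrete statistical face. The sketch below uses invented token probabilities, not any real model's: it shows how temperature-scaled sampling, a standard decoding control, concentrates probability mass on the single most common continuation as the temperature drops, which is one mechanism by which generated text can converge on the same few choices.

```python
# Toy demonstration (invented token probabilities, no real model):
# temperature-scaled softmax over next-token logits. Low temperature
# concentrates mass on the single most likely token, which is one
# mechanism by which decoding can flatten stylistic variety.
import math

def softmax(logits: list[float], temperature: float) -> list[float]:
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["said", "remarked", "quipped", "bellowed"]
logits = [3.0, 2.2, 1.5, 0.8]  # "said" is the statistical favorite

for temp in (1.5, 1.0, 0.3):
    probs = softmax(logits, temp)
    dist = ", ".join(f"{t}={p:.2f}" for t, p in zip(tokens, probs))
    print(f"T={temp}: {dist}")
# At T=0.3 almost all the mass sits on "said"; rarer, more distinctive
# word choices effectively vanish from the output distribution.
```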
Research published in Trends in Cognitive Sciences reveals that Large Language Models like ChatGPT are creating sameness in human writing and thinking. Scientists warn that AI influence extends beyond direct users, threatening cognitive diversity and collective intelligence as one-third of Americans now use these tools regularly.
A troubling pattern has emerged in how people write, think, and communicate since ChatGPT launched in 2022. Research published in Trends in Cognitive Sciences reveals that Large Language Models (LLMs) are driving thought homogenization across human discourse, creating a sameness that threatens cognitive diversity [1]. The study, led by Zhivar Sourati, a computer scientist at the University of Southern California, analyzed more than 130 studies across linguistics, computer science, and psychology to understand how AI's homogenization of human expression has become a measurable phenomenon [3].
Source: Nature
The scale of this issue matters because AI adoption has accelerated dramatically. Pew Research found that one-third of all Americans used ChatGPT last year, double the 2023 figure, while two-thirds of teens now use chatbots, with nearly a third using them daily [2]. Stanford data shows that 78% of organizations reported using AI in 2024, up from 55% in 2023 [2]. This widespread adoption of large language models means millions are now exposed to the same narrow patterns of expression and reasoning.
Source: Gizmodo
Analysis of Reddit posts, news content, and preprint studies from both before and after ChatGPT's 2022 launch shows that text published after the platform's release tends to be less stylistically diverse than text from before [1]. This convergence in how people think and communicate stems from how LLMs are trained. "Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," Sourati explains [3].
Source: CNET
The erosion of unique individual thinking extends beyond writing styles to actual opinions and reasoning methods. In a study published in Science Advances, participants who used AI helpers to write about sociopolitical topics like the death penalty tended to express attitudes more similar to what the LLMs wrote than did a control group that didn't use AI [1]. Sterling Williams-Ceci, an information scientist at Cornell University, warns this effect could eventually reduce diversity of political views, though the precise impact depends on the leanings various LLMs express [1].

Perhaps most concerning is that study participants didn't notice they had been influenced by the chatbots. Even when participants were explicitly told that AI could bias their viewpoints, the results remained unchanged [1]. "At the moment, we don't know how to prevent this," Williams-Ceci admits, noting that the answer probably isn't as simple as giving users a plain warning [1].

The risks of human-AI collaboration extend to standardizing expressions and reasoning in ways that limit problem-solving approaches. LLMs use chain-of-thought reasoning, which reflects a linear form of thinking, but are incapable of more abstract styles of reasoning that may require leaps in logic [3]. While individuals using LLMs to generate ideas often produce more volume, groups of people actually produce fewer ideas when using LLMs compared to when they simply collaborate and bounce ideas off each other [3].
The reduced diversity of ideas carries significant societal implications for pluralism and collective intelligence. "This value of pluralism is rooted in the long-held principle that sound judgment requires exposure to varied thought," the authors write, warning that "unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability" [2].

Oliver Hauser, who researches economics and AI at the University of Exeter, notes that while AI "helps you write better and makes you more understandable to other people," the collective suffers when adoption becomes widespread [1]. AI's effect on creativity extends to scientific thought as well. A paper published in Nature in January found that scientists who used AI tools for their research tended to focus on a more restricted set of fields than those who did not [1]. "We might lose out on a crazy idea that seems outrageous, but turns out to be the exact thing we need for a breakthrough," Hauser warns [1].

The concern extends beyond direct users. "If people around you are interacting with these LLMs and adopting their writing styles, perspectives and reasoning, at some point it would surround you so much that it would seem like the more socially correct way to frame information," Sourati explains [1]. This creates pressure even for non-users to conform to AI-influenced patterns of expression. "The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," Sourati adds [2].

Some research suggests resistance is possible. A November preprint reported that human writers evolve in different ways while engaging with AI, with some maintaining their own personal writing style or developing one that becomes even more markedly distinct from the AI's [1]. Yun Wan at the University of Houston-Downtown found that when participants used ten unique AI personas with diverse cultural backgrounds and thinking styles to generate story plots, the resulting stories had a similar level of diversity to stories written by humans with no AI assistance [1]. These findings suggest that how AI systems are designed and deployed may determine whether they flatten or preserve human creativity in the years ahead.