Large Language Models Are Homogenizing Human Expression, New Research Warns


Scientists warn that widespread use of AI chatbots like ChatGPT is creating sameness in how people write, reason, and think. New research published in Trends in Cognitive Sciences reveals that large language models are reducing diversity in discourse, with effects extending even to non-users. One-third of Americans used ChatGPT last year, raising concerns about the erosion of unique individual thinking.

AI Influence on Writing Styles Spreads Beyond Direct Users

People are adopting writing patterns, reasoning methods, and even opinions from the large language models they interact with, according to a growing body of research examining AI-mediated communication. Zhivar Sourati, a computer scientist at the University of Southern California, co-authored an opinion piece published in Trends in Cognitive Sciences arguing that these tools are homogenizing human expression in ways that threaten collective intelligence [1]. The concern extends beyond individual users: "If people around you are interacting with these LLMs and adopting their writing styles, perspectives and reasoning, at some point it would surround you so much that it would seem like the more socially correct way to frame information," Sourati explains [1].

Analysis of Reddit posts, news content, and pre-print studies from before and after ChatGPT launched in 2022 revealed that text published after the platform's release tends to be less stylistically diverse than earlier content [1]. This AI homogenization effect appears widespread: Pew Research found that one-third of all Americans used ChatGPT last year, double the 2023 figure, while two-thirds of teens report using chatbots [2]. Stanford data shows 78% of organizations reported using AI in 2024, up from 55% in 2023 [2].

Source: Nature

Training Data Shapes the Homogenization of Human Thought

The convergence in how people think stems partly from how large language models are built. "Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," Sourati notes [2]. Research published in Science Advances found that people's opinions on social issues start to echo those picked up from AI tools: participants who used AI helpers to write about sociopolitical topics like the death penalty expressed attitudes more similar to what the LLMs wrote than did a control group [1].

Crucially, study participants didn't notice they had been influenced by the chatbots. Even when told that AI could bias their viewpoints, the results remained unchanged. "At the moment, we don't know how to prevent this," says Sterling Williams-Ceci, an information scientist at Cornell University [1]. This unconscious adoption of AI-generated perspectives could eventually reduce the diversity of political views and erode unique individual thinking patterns.

Source: CNET

Societal Implications of AI Extend to Scientific Innovation

The societal implications of AI reach beyond everyday communication into scientific research itself. "We might lose out on a crazy idea that seems outrageous, but turns out to be the exact thing we need for a breakthrough," warns Oliver Hauser, who researches economics and AI at the University of Exeter [1]. A paper published in Nature in January found that scientists who used AI tools for their research tended to focus on a more restricted set of fields than those who did not [1].

This trend threatens societal pluralism and collective intelligence. "Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability," the authors write [2]. The value of diverse perspectives in reaching sound judgment could diminish as reduced diversity in discourse becomes normalized. "The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," Sourati explains [2].

Some Evidence Suggests Human-AI Collaboration Preserves Authenticity

Not all research points toward uniformity. A November preprint posted to arXiv identified groups of writers who preserve "distinctively human stylistic signatures, possibly valuing authenticity over efficiency gains offered by AI assistance," though the study has not yet undergone peer review [1]. Some people's writing grows more similar to AI models' output, while others maintain their personal styles or develop ones that become even more markedly distinct [1].

Research by Yun Wan at the University of Houston-Downtown tested whether diverse AI personas could maintain variety in human-AI collaboration. His team created ten unique AI personas with different cultural backgrounds and thinking styles to generate plots for creative stories. Analysis showed the resulting stories were about as diverse as stories written by humans with no AI assistance, though this work also awaits peer review [1]. These findings suggest that how AI tools are designed and deployed may determine whether they flatten or preserve the diversity of ideas that characterizes human expression.
