AI influence drives thought homogenization, flattening how millions write and reason

Reviewed by Nidhi Govil


Research published in Trends in Cognitive Sciences reveals that Large Language Models like ChatGPT are creating sameness in human writing and thinking. Scientists warn that AI influence extends beyond direct users, threatening cognitive diversity and collective intelligence as one-third of Americans now use these tools regularly.

AI Influence Reshapes Human Expression Across Millions of Users

A troubling pattern has emerged in how people write, think, and communicate since ChatGPT launched in 2022. Research published in Trends in Cognitive Sciences reveals that Large Language Models (LLMs) are driving thought homogenization across human discourse, creating a sameness that threatens cognitive diversity [1]. The study, led by Zhivar Sourati, a computer scientist at the University of Southern California, analyzed more than 130 studies across linguistics, computer science, and psychology to show that AI's homogenization of human expression has become a measurable phenomenon [3].

Source: Nature

The scale of this issue matters because AI adoption has accelerated dramatically. Pew Research found that one-third of all Americans used ChatGPT last year, double the 2023 figure, while two-thirds of teens now use chatbots, with nearly a third using them daily [2]. Stanford data shows that 78% of organizations reported using AI in 2024, up from 55% in 2023 [2]. This widespread LLM adoption means millions are now exposed to the same narrow patterns of expression and reasoning.

Source: Gizmodo

Convergence in How People Think and Communicate

Analysis of Reddit posts, news content, and pre-print studies from both before and after ChatGPT's 2022 launch shows that text published after the platform's release tends to be less stylistically diverse than text from before [1]. This convergence stems from how LLMs are trained. "Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," Sourati explains [3].
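The stylistic convergence described above can be made concrete with a simple corpus statistic. As an illustrative sketch only (this is not the method the researchers used), the snippet below scores a set of texts by the mean pairwise cosine similarity of their bag-of-words vectors; a corpus drifting toward uniform phrasing scores higher:

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def mean_pairwise_similarity(texts: list[str]) -> float:
    """Average cosine similarity over all pairs of texts.

    Higher values indicate a more homogeneous corpus.
    """
    vecs = [Counter(t.lower().split()) for t in texts]
    pairs = list(combinations(vecs, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Hypothetical mini-corpora for illustration: varied human phrasing
# versus the kind of uniform phrasing the study associates with LLM text.
varied = [
    "the cat sat on the mat",
    "quantum tunneling defies intuition",
    "sourdough needs patient fermentation",
]
uniform = [
    "overall this shows great potential",
    "overall this shows great promise",
    "overall this shows real potential",
]

print(mean_pairwise_similarity(varied))   # low: texts share almost no vocabulary
print(mean_pairwise_similarity(uniform))  # high: texts reuse the same phrasing
```

A real analysis would use far richer stylistic features than word counts, but even this toy metric captures the direction of the reported finding: as phrasing converges, pairwise similarity across a corpus rises.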

Source: CNET

The erosion of unique individual thinking extends beyond writing styles to actual opinions and reasoning methods. In a study published in Science Advances, participants who used AI helpers to write about sociopolitical topics like the death penalty tended to express attitudes more similar to what the LLMs wrote than did a control group that didn't use AI [1]. Sterling Williams-Ceci, an information scientist at Cornell University, warns this effect could eventually reduce the diversity of political views, though the precise impact depends on the leanings various LLMs express [1].

Standardizing Expressions and Reasoning Without User Awareness

Perhaps most concerning is that study participants didn't notice they had been influenced by the chatbots. Even when participants were explicitly told that AI could bias their viewpoints, the results remained unchanged [1]. "At the moment, we don't know how to prevent this," Williams-Ceci admits, noting that the answer probably isn't as simple as giving users a plain warning [1].

The risks of human-AI collaboration extend to standardizing expressions and reasoning in ways that limit problem-solving approaches. LLMs rely on chain-of-thought reasoning, a linear style of thinking, and are incapable of more abstract styles of reasoning that may require leaps in logic [3]. And while individuals using LLMs to generate ideas often produce more volume, groups of people actually produce fewer ideas when using LLMs than when they simply collaborate and bounce ideas off each other [3].

Reduced Diversity of Ideas Threatens Collective Intelligence

The reduced diversity of ideas carries significant societal implications for pluralism and collective intelligence. "This value of pluralism is rooted in the long-held principle that sound judgment requires exposure to varied thought," the authors write, warning that "unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability" [2].

Oliver Hauser, who researches economics and AI at the University of Exeter, notes that while AI "helps you write better and makes you more understandable to other people," the collective suffers when adoption becomes widespread [1]. AI's effect on creativity extends to scientific thought as well. A paper published in Nature in January found that scientists who used AI tools for their research tended to focus on a more restricted set of fields than those who did not [1]. "We might lose out on a crazy idea that seems outrageous, but turns out to be the exact thing we need for a breakthrough," Hauser warns [1].

Training Data Biases Shape What Counts as Credible

The concern extends beyond direct users. "If people around you are interacting with these LLMs and adopting their writing styles, perspectives and reasoning, at some point it would surround you so much that it would seem like the more socially correct way to frame information," Sourati explains [1]. This creates pressure even for non-users to conform to AI-influenced patterns of expression. "The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," Sourati adds [2].

Some research suggests resistance is possible. A November preprint reported that human writers evolve in different ways while engaging with AI, with some maintaining their personal writing style or developing one that becomes even more markedly distinct from the AI's [1]. Yun Wan at the University of Houston-Downtown found that when participants used ten unique AI personas with diverse cultural backgrounds and thinking styles to generate story plots, the resulting stories were about as diverse as stories written by humans with no AI assistance [1]. These findings suggest that how AI systems are designed and deployed may determine whether they flatten or preserve human creativity in the years ahead.
