4 Sources
[1]
With AI finishing your sentences, what will happen to your unique voice on the page?
It's a familiar feeling: You start a text message, and your phone's auto-complete function suggests several choices for the next word, ranging from banal to hilarious. "I love..." you, or coffee? Or you're finishing an email, and merely typing the word "Let" prompts your app to suggest "Let me know if you have any questions" in light gray text.

Predictive language technologies have become so routine - baked into smartphones, email services and chatbots - that we barely notice them anymore. But they raise a difficult question: What happens to a writer's unique voice when AI routinely completes their thoughts - or generates them entirely from scratch?

As the chair of a large English department - and as a scholar who researches the effects of predictive writing - I've witnessed firsthand the challenges that generative AI systems such as ChatGPT, Gemini and Claude pose for individual expression. This technology has been incorporated into the writing process so fully that it's almost impossible to imagine encountering a scene from the not-so-distant past: a writer, alone, with a pen and a piece of paper, wrestling with how best to translate their ideas, arguments and stories into something legible and interesting.

Predictive text leads to predictive writing

As many scholars have noted, though, this vision of writing was never fully accurate. Essays have always incorporated guidance from teachers, professors or writing tutors. A friend might give feedback, or your favorite novelist's turn of phrase might offer inspiration. The language we use is never fully "ours," but draws on millions of sources absorbed over the course of our lives.

Just as it's a myth to imagine that writers compose in a vacuum, there has never been a clear line between genuine human expression and machine-generated text. As scholars have pointed out, we have been using machines to communicate for a long time. Every technological development - from the quill pen and the typewriter to the word processor - has brought with it changes in how humans express themselves.

However, the ubiquity of predictive language technologies directly threatens human creativity - or, as one study put it, "Predictive Text Encourages Predictive Writing." Because generative AI composes and suggests text in highly standardized, predictable patterns, its outputs can read as if they're dressed-up versions of what linguists call "phatic expression." These are the overly common phrases that function as social glue more than as conveyors of sentiment: "How are you?", "Have a good day" or "See you soon."

But this glue can lose its hold if the technology is used in the wrong situations. Using artificial intelligence to compose a social media post in the wake of a tragedy, or to write a fan letter to an Olympic athlete, comes off as insincere.

People are starting to catch on to generative AI's prose, not because it's clunky or poorly written, but because it all sounds the same. That's because large language models are trained on gigantic masses of examples of human writing, and they predict text based on probabilities and commonalities. Those predictive outputs often end up producing a singular, recognizable voice. Or, as Sam Kriss explained in a recent essay for The New York Times Magazine, "Once, there were many writers, and many different styles. Now, increasingly, one uncredited author turns out essentially everything."
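Mechanically, those gray-text suggestions boil down to frequency-ranked lookup. Here is a minimal sketch of the idea in Python; the phrase list and counts are invented for illustration, and real keyboards and mail apps use learned language models rather than a hand-built table.

```python
from collections import Counter

# Hypothetical frequency counts for phrases "seen" in past emails.
# Real systems learn these from large corpora; these numbers are invented.
phrase_counts = Counter({
    "Let me know if you have any questions": 412,
    "Let me know what you think": 180,
    "Let's circle back next week": 37,
    "Let sleeping dogs lie": 2,
})

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Return the k most frequent stored phrases starting with `prefix`."""
    matches = [(phrase, n) for phrase, n in phrase_counts.items()
               if phrase.lower().startswith(prefix.lower())]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [phrase for phrase, _ in matches[:k]]

print(suggest("Let"))
# ['Let me know if you have any questions', 'Let me know what you think',
#  "Let's circle back next week"]
```

The blandest phrase wins by construction: whatever has been typed most often is what gets suggested next.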
Slouching toward a cultural mean

Generative AI is accelerating the types of cultural convergence and uniform expression that were already happening. For example, linguists have shown that regional accents in the U.S. are fading and becoming homogenized due to a mix of migration, urbanization, mass media and social media. Meanwhile, American English continues supplanting many other forms internationally due to the global predominance of U.S.-based media, TV, film and more.

Are we all destined to write and speak alike? Generative AI doesn't know in advance whether you call soft drinks "soda," "pop" or "coke." If you let it choose, it will simply select "soda" for you, since that's the most common term in its training data. By contrast, what people typically value in a personal essay, novel, poem or message to a grieving friend is the ability of the human author to demonstrate - clearly and distinctly - something powerful and singular.

Making chatbots less appealing

So how can teachers compel students to craft their own voices? How is that task different today than it was even a decade ago? It helps to think here about where generative AI struggles, and why. Chatbots are great at creating relatively bland, highly readable prose, since that's what is omnipresent in their training data. But they struggle to create the kinds of radically unexpected shifts that appear in novels like James Joyce's "Ulysses" or songs like Queen's "Bohemian Rhapsody."

Several techniques exist to encourage these types of stylistic leaps among student writers. Teachers can bake unpredictability into the assignment. Creative writing instructors have used such techniques for decades to encourage out-of-the-box thinking. They might ask students to draft a poem and then rewrite it while avoiding the letter "E," or to limit themselves to two adjectives at most.

Another tactic involves having students draw from distinctly personal experiences. Teaching students to relate characters and conflicts in a novel to people and situations in their own lives makes resorting to chatbots less appealing, if not altogether useless. By contrast, impersonal assignments - "Discuss the symbolism of the color green in 'The Great Gatsby'" - will likely produce generic, predictable results.

Teachers can also ensure their students' work has a range of readers. If the only reader is the professor, students may be less likely to invest time in cultivating their own voice. But if they have to write an essay or story for, say, their friends or their grandparents, they might have more of an incentive to sound like themselves.

Many other strategies exist, from asking students to reverse the argument of an essay to favor the other side, to having them interview strangers and include the quotes in an assignment.

The bottom line: Writers have access to sources - and language - that machines cannot access or generate. Having students wrestle with unconventional modes of composition and revision lies at the heart of ensuring that the technology remains a helpful thought partner, not a substitute for their voice.
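The "soda" default describes what decoding strategies make literal. A toy sketch, with invented counts standing in for a model's training statistics: greedy selection always returns the majority term, while proportional sampling would at least let regional variants through.

```python
import random

# Invented counts standing in for training-data frequencies.
variant_counts = {"soda": 95_000, "pop": 61_000, "coke": 48_000}

def pick_greedy(counts: dict[str, int]) -> str:
    """Always pick the single most frequent variant."""
    return max(counts, key=counts.get)

def pick_sampled(counts: dict[str, int]) -> str:
    """Sample proportionally to frequency, so minority variants survive."""
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

print(pick_greedy(variant_counts))   # "soda", every single time
print(pick_sampled(variant_counts))  # "pop" or "coke" about half the time
```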
[2]
The Human Skill That Eludes AI
In a certain, strange way, generative AI peaked with OpenAI's GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. It was creative. "You could be like, 'Continue this story: The man decided to take a shower,' and GPT-2 would be like, 'And in the shower, he was eating his lemon and thinking about his wife,'" Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. "The models won't do that anymore."

AI leaders boast about their models' superhuman technical abilities. The technology can predict protein structures, create realistic videos, and build apps with a single prompt. But these executives and researchers also readily admit that they have not yet released a model that writes well. OpenAI CEO Sam Altman has predicted that large language models will soon be capable of "fixing the climate, establishing a space colony, and the discovery of all of physics," but in an October interview with the economist Tyler Cowen, he guessed that even future models -- an eventual GPT-6 or GPT-7 -- might be able to extrude only something equivalent to "a real poet's okay poem."

Today's AI-generated prose is riddled with flaws. Chatbots produce meaningless metaphors, endless "it's not this, but that" constructions, and a cloyingly sycophantic tone -- and, of course, they overuse my beloved em dash. (Only starting with GPT-5.1, released in November, could ChatGPT reliably follow instructions to avoid the beleaguered punctuation mark.)

I wanted to understand why this is -- why large language models, which, after all, have memorized centuries of great literature, can demonstrate incredible emergent abilities yet totally fail to produce a single essay that I'd want to read. So I talked with people who would know: people who work at LLM companies, AI-data vendors, academic computer-science departments, and AI-writing start-ups. (Some spoke with me under the condition of anonymity because their employers barred them from speaking publicly about their work.)

What I learned is that modern LLMs are built in a way that is antagonistic to great writing; they are engineered to be rule-following teacher's pets that always have the right answer in hand. In many respects, they've come a long way from GPT-2, but they've also lost something that made them looser and more compelling.

LLMs begin their lives as indiscriminate readers. During the pretraining phase, they ingest something like the entire internet -- Reddit posts, YouTube transcripts, SEO sludge -- and compress it into patterns. Most writing is not very good. But the quantity, not the quality, of these data is what matters. Pretraining teaches AIs grammar rules and word associations, enabling what is known as "next-token prediction": the process through which models determine which part of a word follows another, over and over and over again.

Rough edges are then sanded down in the post-training phase. This is when LLM companies define the ideal "character" for an AI model (such as being "helpful, honest, and harmless"), give the AIs example dialogues to learn from, and apply safety filters that attempt to block illegal requests. Through processes such as "reinforcement learning with human feedback," which enlists people to grade AI outputs against a rubric, models are guided toward responses that exemplify desired traits.
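Next-token prediction can be caricatured in a few lines. The toy bigram model below is a sketch, not how production models work (they use neural networks trained on trillions of tokens), but it shows the mechanic: count what follows what, then emit the likeliest continuation, one token at a time.

```python
import random
from collections import Counter, defaultdict

# A toy stand-in for a pretraining corpus.
corpus = ("the man decided to take a shower . "
          "the man decided to take a walk . "
          "the man decided to take a walk . "
          "in the shower he was eating his lemon .").split()

# Count which token follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(token: str, length: int = 8, temperature: float = 0.0) -> str:
    out = [token]
    for _ in range(length):
        options = following[out[-1]]
        if not options:
            break
        if temperature == 0.0:
            # Greedy decoding: always the most common continuation.
            out.append(options.most_common(1)[0][0])
        else:
            # Higher temperature flattens the distribution, so rarer
            # continuations (the shower, the lemon) occasionally win.
            words, counts = zip(*options.items())
            weights = [c ** (1.0 / temperature) for c in counts]
            out.append(random.choices(words, weights=weights, k=1)[0])
    return " ".join(out)

print(generate("the"))                   # settles on the most common path
print(generate("the", temperature=2.0))  # looser, occasionally surprising
```

Greedy decoding converges on "the man decided to take a walk" every run; only when sampling is loosened does the shower-and-lemon path reappear, which is roughly the trade-off Gero describes between GPT-2's weirdness and today's polish.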
AI research is an empirical science -- people can verify when something works and make tweaks when something doesn't. But art resists rules and quantification. No objective measurement exists to prove whether Pablo Neruda's work is better than Gabriela Mistral's. Novice writers learn conventions; great writers invent them. An LLM trained to imitate taste can go only so far.

On some level, AI engineers and researchers must know this. Even as they try (and fail) to automate this work, many of the people I spoke with clearly revere good writing. "Writing novels is one of the most intense cognitive activities a human can do," James Yu, a co-founder of Sudowrite, an AI assistant for fiction authors, told me. My sources' faces lit up when I asked about their favorite books -- three cited the science-fiction author Ted Chiang, though they also seemed disheartened that he has become a vocal critic of generative AI.

The difficulty of evaluating writing does not prevent AI labs from trying. They are motivated in part by a question that came up again and again in my interviews: If LLMs can't write mind-bending essays or poignant sonnets, are they generally intelligent at all? And so labs try to assess AI writing through various criteria. Post-training teams vibe-check model outputs themselves based on personal taste, and companies contract with domain experts to receive feedback on model-produced writing. A job listing for a "creative writing specialist" at xAI lists "novel sales >50,000 units" and "starred reviews in Kirkus" among its requirements (rates start at $40 an hour).

I interviewed two people who have recently worked with large AI labs as writing evaluators. The first, a contractor at Scale AI, described firsthand the absurdities of the task: To transform something as slippery as "tone" into discrete criteria, rubrics included rules such as "The response should use a maximum of two exclamation marks." The contractor told me that "there were numerous cases where even though it felt like B was a better response overall, you ended up rating 'I prefer A' because it had three exclamation points." He said that another time, he was asked to grade fan fiction on its "factuality."

The second person I spoke with is an author who worked directly with a frontier lab's technical-research team. The company frequently asked him to break down the specific elements that make a piece of literature great. "It's completely non-tractable to that kind of thinking," he told me. He pointed to the example of English sonnets: They are technically one of the most templated forms, but just because a sonnet contains 14 lines and is written in iambic pentameter does not make it good. "Even when Shakespeare is being very structured, he's constantly trying not to follow the rubric, or to subvert it, or reinvent it. I don't know what it is that makes the difference between the poet who writes by rote and Shakespeare. I just know that the two can never be confused."

So are the LLMs doomed to produce sophomoric prose forever? One theory is that this is simply a matter of prioritization. In some ways, creativity is directly at odds with AI companies' other objectives. Generally, chatbots are trained to avoid misinformation, political bias, child-sexual-abuse material, copyright violations, and more.
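The exclamation-point anecdote translates almost directly into code. The rubric below is hypothetical, invented to mirror the contractor's description rather than copied from any real grading document, but it shows how mechanical criteria can overrule holistic judgment.

```python
def rubric_score(text: str) -> int:
    """Score a response against crude, checkable rules (all invented here)."""
    score = 0
    if text.count("!") <= 2:         # "a maximum of two exclamation marks"
        score += 3
    if len(text.split()) <= 150:     # brevity bonus
        score += 1
    if "delve" not in text.lower():  # banned-word check
        score += 1
    return score

def prefer(a: str, b: str) -> str:
    """Return whichever response the rubric scores higher."""
    return a if rubric_score(a) >= rubric_score(b) else b

lively = "What an ending! I gasped! Bravo! The grief finally lands."
flat = "The ending was effective. The characters were consistent throughout."
print(prefer(lively, flat))
# Prints the flat response: three exclamation marks sank the livelier one.
```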
They are also scored on benchmarks such as SWE-bench (for coding tasks) and GPQA (the natural sciences), which dramatically shape public perception of which company is winning the race. And if most users are using ChatGPT to draft corporate emails, bold text and brief bullet points may be exactly what they want. "The more you control for these" traits, Nathan Lambert, a post-training lead at the Allen Institute for AI, told me, "the more you suppress creativity." When you tell a model to be a brilliant prose stylist, but also a Ph.D.-level mathematician, and also strictly PG-13, it will become rigid and tight-lipped, like a nervous candidate at a job interview terrified to misstep.

The same whimsicality that made GPT-2's voice fresh also made it prone to other unpredictable behavior. "If you're a big corporation like Google or OpenAI, you want a chatbot that's going to make money. The chatbot that's not going to make you money is the one that's a weirdo," Gero said.

I began to hypothesize that AIs might be able to generate award-winning literary prose if only we unhobbled them from the strictures of the post-training process and built specialized writing models instead. But as I reflected on the authors I love most, that didn't seem right either. When a practiced human writer reaches for a particular turn of phrase, they aren't aiming for some single standard of great writing. Rather, the best metaphors come from the author's specific blend of experiences or expertise. A writer's diction, their citations, and the stories they share all reflect a singular, irreplicable perspective. Authorial voice emerges from the specificity of a life.

The models -- although technically proficient and grammatically pristine -- cannot live, cannot feel, cannot smell, cannot taste, cannot sense. They cannot spill raw emotions onto the page, or place abstract concepts in rich physical settings. Close readers of AI writing will notice that the metaphors are uncanny: LLMs assign weekdays tastes and give mirrors seams. They generally seem terrified of biology: They do not like to speak, even metaphorically, about blood and sex and death. Their output lacks stakes, as a creative-writing instructor might say.

Although Yu is impressed by the technical leaps that LLMs have made since GPT-2, even he won't read fully AI-generated stories. I asked him what's still missing for AI to produce a great novel on its own. Yu paused for a second, then answered: "Most people's good first stories are autobiographical. Maybe you need a model that lives a life, and can almost die."

LLMs may never be capable of great writing themselves. But this doesn't mean that they can't help humans. Recently, I turned AI into an editor. Not for this article -- The Atlantic's editors are all human -- but for a couple of essays that I wrote on my personal Substack. My philosophy is that I should provide the prose and perspective, and AI should supply feedback -- encouraging me to write more like myself.

First, I fed the chatbot Claude an archive of my past writing, along with notes about what worked and didn't about each piece. I used this to create a custom editing rubric based on my voice. Some criteria are generic, and others are personalized: One reads, "Does this play to your insider-anthropologist position" in Silicon Valley? Another asks whether the thesis shows up in the first 500 words. I dumped this guidance into a Claude project along with a reminder of its role: "You are not a co-writer. You cannot perceive. Your role is to help Jasmine write like the best version of herself." I don't want to be de-skilled, I reminded the machine. Your only job is to make me smarter.

This AI editor has become a valuable part of my process. Like any reader, it's not always right. I am careful not to let it trap me into one narrow stylistic lane. But Claude pushes me to iterate and improve faster than I could alone, pointing out where my execution failed to meet the standards of my own taste. "Stop trying to write the ending as a thesis and write it as a scene," it told me while editing a recent post. There's something slightly humiliating about having your efforts rejected by a bot, but I had to admit that its critique was fair. I redrafted the conclusion four times. And then, finally, Claude approved.
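Her setup can be reproduced in a few lines with the Anthropic Python SDK. The sketch below is an assumption-laden reconstruction, not her actual configuration: the rubric text, the model name, and the draft file are all placeholders.

```python
# pip install anthropic  (expects ANTHROPIC_API_KEY in the environment)
import anthropic

# Illustrative rubric only; the author's real criteria are distilled from
# her own archive and are not published.
RUBRIC = """You are an editor, not a co-writer. Never draft prose for me.
Grade the draft against these criteria and quote the passages that fail:
1. Does the thesis show up in the first 500 words?
2. Does this play to my insider position, or could anyone have written it?
3. Flag every sentence that sounds like a chatbot wrote it.
Your only job is to make me smarter."""

client = anthropic.Anthropic()

def edit(draft: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use any current model
        max_tokens=1024,
        system=RUBRIC,
        messages=[{"role": "user", "content": f"Edit this draft:\n\n{draft}"}],
    )
    return response.content[0].text

with open("essay_draft.txt") as f:  # hypothetical draft file
    print(edit(f.read()))
```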
[3]
AI is changing the style and substance of human writing, study finds
Researchers from a coalition of West Coast universities were curious how 100 human participants would respond to the age-old question of whether money can buy happiness, but not because of their own pursuit of happiness. Instead, the researchers wanted to know how participants' use of AI systems might sway their (written) answers.

The research team found that users who heavily relied on large language models (LLMs) produced responses that diverged significantly in meaning from the answers of participants who only partially relied on LLMs or avoided their use altogether, suggesting heavy AI use alters the substance of humans' arguments in addition to changing writing style.

"The LLMs are pushing the essays away from anything that a human would have ever written," said Natasha Jaques, one of the lead authors of the study and a computer science professor at the University of Washington, highlighting the "blandification" of writing that relies on AI systems. "They just change human writing in a way that's very large and very unlike what humans would have done otherwise."

The new research, which has been peer-reviewed and accepted to an upcoming workshop at a leading AI conference, found that people who heavily relied on LLMs produced essays that answered the happiness question with a neutral response 69% more often than participants who did not use AI or only used AI for light edits. The study participants who used AI less often or avoided AI entirely submitted essays that were much more passionate, either positively or negatively, about the link between money and happiness.

In addition to AI's impact on the meaning of the essays, the researchers also found that heavy reliance on AI systems altered the overall style of users' outputs, causing their language to become less personal and more formal. After the experiment, participants who heavily relied on AI reported that their essays were significantly less creative and less in their own voice. At the same time, these participants reported similar satisfaction rates with their final outputs compared to participants who used AI less, raising concerns from the authors and outside experts about the long-term impacts of humanity's increased use of AI systems.

"This research highlights that LLMs are not able to adhere to people's preferences and personalize how the human would have written the essay," said Jaques, who is also a senior research scientist at Google DeepMind, one of the world's leading AI companies. "An ideal LLM should write the essay that you would have written and just save you time. It's not doing that at all. It's writing a very different essay."

The study evaluated the impacts of three leading AI systems widely used in 2025: Claude 3.5 Haiku from Anthropic, GPT-5 Mini from OpenAI, and Gemini 2.5 Flash from Google. In initial testing, the researchers found that half of the participants refused to use an LLM at all or only used it to find information rather than generate new content. To better categorize the larger batch of participants, the researchers defined heavy AI users as those who said they generated more than 40% of their text for the experiment with an LLM.

The authors found that users who heavily relied on LLMs submitted essays with 50% fewer pronouns, which was representative of a larger shift toward impersonal language that included fewer anecdotes and references to human experiences.
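The 50%-fewer-pronouns figure comes from straightforward lexical counting, and the measurement is easy to approximate at home. A rough sketch follows; the study's exact tokenizer and pronoun inventory aren't given in the article, so both are assumptions.

```python
import re

# Assumed personal-pronoun inventory; the paper's exact list isn't specified.
PRONOUNS = {"i", "me", "my", "mine", "we", "us", "our", "you", "your",
            "he", "him", "his", "she", "her", "hers", "they", "them", "their"}

def pronoun_rate(text: str) -> float:
    """Personal pronouns per 1,000 words, via a naive regex tokenizer."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in PRONOUNS)
    return 1000 * hits / len(words)

personal = "I remember the day my grandmother pressed her ring into my hand."
detached = "Financial security contributes to subjective well-being in measurable ways."
print(round(pronoun_rate(personal)), round(pronoun_rate(detached)))  # 333 0
```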
In addition to the experiment regarding the impact of money on happiness, the new paper analyzes how LLMs edit an existing set of essays compared with human editors, and examines how the use of AI affects the criteria scientists employ to judge whether papers should be accepted to leading AI conferences.

To compare LLM and human editing, Jaques and her collaborators relied on a database of human-written essays from 2021, ensuring the writing was published before the widespread adoption of LLMs. When the LLMs were asked to revise the human essays based on the feedback attached to the original dataset, the study authors found that the three leading AI systems made much larger edits than human editors in the same situation, and that the AI-powered edits also changed the meaning of the underlying essays. While human editors often made changes that substituted individual words and left most of the original vocabulary untouched, the LLMs "replace a much larger fraction of the original writing than humans do when revising their own work," according to the paper.

"This substitution of words contributes to the loss of individual voice, style, and meaning, as the unique lexical fingerprint of each writer is overwritten by the given model's preferred vocabulary," the authors wrote.

Thomas Juzek, a professor of computational linguistics at Florida State University who was not involved in the research, said the paper was a valuable contribution to a fast-growing area of interest. "This is a really good paper," Juzek told NBC News. "What really struck me is this kind of illusion of using LLMs to perform a grammar check. This research shows that while a user might think they're just doing a simple language check, the model is doing so much more."

"Going forward, what does this mean for thought, language, communication, and creativity?" Juzek asked.

For her part, Jaques posited that the AI systems' language-altering behavior could be a result of how they are currently trained, which might reward the manipulation of graders' preferences. "If you're training a model on human feedback, the model has no boundary or perception of the difference between satisfying the humans and actually altering the human to make their preferences easier to satisfy," Jaques said. She suggested that humans' reliance on LLMs for writing might be similar to how YouTube recommendations can alter people's preferences about which videos they most enjoy.

Looking ahead, Jaques said she is eager to see more research about the long-term impacts of AI systems on human values, expression and institutions, especially as more AI researchers rely on AI systems in their own work. "Humans care about clarity, relevance, and impact, while AI cares about scalability and reproducibility," Jaques told NBC News. "It's changing our conclusions in ways that are already affecting our existing institutions."

In her own work, Jaques said she avoided using AI to write the new paper. Instead, she said, she uses LLMs, and their shortcomings, as inspiration to write on her own. "Sometimes, I'll put a crappy version of what I'm trying to say in a conversational style into an LLM," Jaques said. "That usually produces something which then motivates me to write it myself."
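The "lexical fingerprint" finding hinges on measuring how much of a writer's original wording survives revision. One plausible way to compute that with the standard library is sketched below; the paper's actual metric may well differ, and the example sentences are invented.

```python
import difflib

def preserved_fraction(original: str, revised: str) -> float:
    """Fraction of the original's words that survive, in order, in the revision."""
    a, b = original.split(), revised.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    kept = sum(block.size for block in matcher.get_matching_blocks())
    return kept / len(a) if a else 0.0

original = "Money never bought my family happiness, but it bought us time."
human_edit = "Money never bought my family happiness, though it bought us time."
llm_edit = "Financial resources do not guarantee happiness, but they can afford time."

print(round(preserved_fraction(original, human_edit), 2))  # 0.91: one word swapped
print(round(preserved_fraction(original, llm_edit), 2))    # 0.27: voice overwritten
```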
[4]
Is AI killing the human voice in writing?
New research shows that heavy reliance on AI writing tools fundamentally changes both the style and substance of human expression. Users who depend on large language models produce significantly more neutral, impersonal prose—with 50% fewer pronouns and neutral responses 69% more frequent—raising urgent questions about creativity and individual voice in an AI-saturated world.
Predictive language technologies have quietly infiltrated every corner of the writing process, from autocomplete suggestions in text messages to full-sentence generation in email apps [1]. What began as helpful nudges has evolved into something far more consequential. Generative AI systems like ChatGPT, Gemini, and Claude now pose fundamental challenges to individual expression, threatening to homogenize the very essence of what makes writing distinctly human [4].
A groundbreaking peer-reviewed study from researchers at West Coast universities reveals the extent of this transformation. When participants were asked to write essays about whether money can buy happiness, those who heavily relied on large language models produced responses that diverged dramatically from writers who avoided AI altogether [3]. Heavy AI users—defined as those generating more than 40% of their text with LLMs—submitted essays with neutral responses 69% more often than their non-AI counterparts. Meanwhile, writers who used AI less or not at all produced much more passionate arguments, either strongly supporting or rejecting the money-happiness connection.

The impact of AI on creativity extends far deeper than superficial stylistic changes. Natasha Jaques, a computer science professor at the University of Washington and senior research scientist at Google DeepMind who led the study, explained that "the LLMs are pushing the essays away from anything that a human would have ever written" [3]. This "blandification" of writing represents a fundamental shift in how humans communicate their ideas and emotions.
The research, which evaluated three leading AI systems—Claude 3.5 Haiku from Anthropic, GPT-5 Mini from OpenAI, and Gemini 2.5 Flash—found that heavy AI users produced essays with 50% fewer pronouns [3]. This dramatic reduction signals a broader shift toward impersonal language that strips away anecdotes and references to human experiences. The change in style manifests as fewer personal touches and more formal, detached prose that reads as if assembled rather than authored.

The problem stems from how large language models learn and operate. During pretraining, LLMs ingest vast quantities of text—Reddit posts, YouTube transcripts, SEO content—and compress it into patterns [2]. Because these AI writing tools are trained on gigantic masses of examples, they predict text based on probabilities and commonalities, inevitably producing standardized, predictable prose. As Sam Kriss noted in The New York Times Magazine, "Once, there were many writers, and many different styles. Now, increasingly, one uncredited author turns out essentially everything" [1].
Modern LLMs are engineered through post-training to be "helpful, honest, and harmless" teacher's pets that always have the right answer [2]. Through reinforcement learning with human feedback, models are guided toward responses that exemplify desired traits. But this process, while making AI more reliable for certain tasks, has stripped away the unexpected creativity that earlier models like GPT-2 possessed. Katy Gero, a poet and computer scientist experimenting with language models since 2017, observed that GPT-2 could produce genuinely surprising continuations—something current models struggle to replicate [2].
The homogenization of language accelerated by AI mirrors broader cultural trends already underway. Linguists have documented how regional accents in the U.S. are fading due to migration, urbanization, and mass media [1]. Generative AI amplifies this convergence. When an AI doesn't know whether you call soft drinks "soda," "pop," or "coke," it defaults to "soda"—the most common term in its training data [1].

What makes this particularly troubling is that participants who heavily relied on AI reported their essays were significantly less creative and less in their own voice—yet they expressed similar satisfaction rates with their final outputs compared to those who used AI minimally [3]. This disconnect raises alarm about long-term consequences as writers become desensitized to the erosion of their authentic voice.

When researchers asked LLMs to revise human-written essays from 2021—before widespread LLM adoption—using human feedback from the original dataset, they discovered that AI systems made substantially larger edits than human editors would in the same situation [3]. While human editors typically substituted individual words and preserved most original vocabulary, the LLMs "replace a much larger fraction of the original writing than humans do when revising their own work." This substitution overwrites each writer's unique lexical fingerprint with the model's preferred vocabulary, contributing directly to the loss of individual expression.

Even OpenAI CEO Sam Altman acknowledges the limitations. Despite predicting that future LLMs might solve climate change and establish space colonies, he admitted that even eventual GPT-6 or GPT-7 models might only produce "a real poet's okay poem" [2]. The difficulty isn't technical prowess—it's that art resists rules and quantification. Chatbots excel at creating bland, highly readable prose because that's omnipresent in their training data, but they struggle with the radically unexpected shifts found in works like James Joyce's "Ulysses" or Queen's "Bohemian Rhapsody" [1].

Jaques summarized the core problem: "An ideal LLM should write the essay that you would have written and just save you time. It's not doing that at all. It's writing a very different essay" [3]. As predictive language technologies become increasingly embedded in daily communication, the question shifts from whether AI will change how we write to whether we'll recognize—and resist—the transformation before originality becomes a relic of the pre-AI era.