New York Times drops critic over AI use as ethical concerns shake publishing industry

Reviewed by Nidhi Govil


The New York Times severed ties with freelance journalist Alex Preston after he used AI to write a book review that incorporated unattributed material from the Guardian. The incident highlights mounting ethical concerns about AI in writing as undisclosed use of artificial intelligence spreads across major publications, threatening reader trust and raising fundamental questions about authorship in the digital age.

New York Times AI Controversy Exposes Industry-Wide Problem

The New York Times has cut ties with freelance journalist and author Alex Preston after discovering he used AI to craft a book review that pulled language from a Guardian piece without attribution. A reader flagged similarities between Preston's January 2026 review of Jean-Baptiste Andrea's novel "Watching Over Her" and Christobel Kent's August Guardian review of the same book [5]. The Times called Preston's "reliance on AI and his use of unattributed work by another writer" a clear violation of its journalism standards [5].

Source: The Conversation

Preston told the Guardian he was "hugely embarrassed" and "made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in" [5]. His apology raises troubling questions about disclosure and the extent of AI's role in creative fields. The incident shows how generative AI can blur the line between inspiration and plagiarism, particularly when writers fail to scrutinize what their AI assistants produce.

Undisclosed Use of Artificial Intelligence Spreads Across Major Publications

The Preston case isn't isolated. Research by Stony Brook University computer science professor Tuhin Chakrabarty and six colleagues found that AI detection tools flagged likely AI use across U.S. press outlets, including in opinion sections of The New York Times, The Wall Street Journal, and The Washington Post [3]. A "Modern Love" column by Kate Gilgan drew scrutiny after writer Becky Tuch posted an excerpt that "reads EXACTLY like AI slop" [3]. When Chakrabarty ran the column through Pangram Labs' AI detector, it estimated that more than 60 percent was AI-generated [3].

Source: The Atlantic

Gilgan acknowledged using AI as "a collaborative editor and not as a content generator," prompting ChatGPT, Claude, Copilot, Gemini, and Perplexity for "inspiration and guidance and correction" [3]. This defense highlights a gray area in publishing standards: where does acceptable assistance end and problematic AI-generated content begin? The Times' ethical-journalism handbook mandates that "substantial use of generative AI" be clearly disclosed to readers, but what counts as "substantial" remains undefined [3].

Ethical Concerns Challenge Core Values of Literary Criticism and Authorship

The role of literary criticism extends far beyond summarization. "Good criticism thrives in the complexity of its environment," writes critic Jane Howard. "Each review sits in conversation with every other review of a piece of art, with every other review the critic has written" [1]. The critic's emotional and intellectual engagement with art is intrinsic to their role as mediator between artist and audience, a deeply human function that AI cannot replicate [1].

Source: NYMag

When critics use AI, they break an unspoken pact with both writers and readers. Writers assume reviewers have taken the time to read and carefully consider their work. Readers trust that published assessments reflect a genuine human response, filtered through individual experience [1]. Australian literature academic Julieanne Lamond explains that "when we write reviews we have to do it 'naked'—as individual readers, with a public to judge our judgements" [1].

Erosion of Trust Between Authors and Readers Accelerates

The publishing industry faces an accelerating crisis. Last week, Hachette canceled U.S. publication of the novel "Shy Girl" after readers flagged prose resembling AI-generated text [3]. Author Andrea Bartz warns of "a rapid erosion of trust between authors and readers" as AI models improve [4]. She notes that with fine-tuning, chatbots can eerily mimic a published writer's word choices and grammatical patterns. Author James Frey has openly admitted using AI, boasting, "I have asked the AI to mimic my writing style so you, the reader, will not be able to tell what was written by me and what was generated by the AI" [4].

Distinguishing human-written from AI-generated text grows harder as models evolve. AI detection tools remain unreliable, producing false positives and varying results across platforms [3]. Pangram CEO Max Spero acknowledged both challenges, warning that percentage estimates of AI content are difficult to determine with certainty [3].

Ghostwriting Debates Resurface in AI Era

The AI debate echoes century-old controversies over ghostwriting, revealing a persistent discomfort with words not belonging to the credited author. The term "ghostwriting" first appeared in a 1908 newspaper article describing an anonymous writer paid $5,000 to help a high-society woman write a book [2]. Even when consensual and compensated, ghostwriting occupies an ethical gray area: a 1953 article noted that scholars could use "forgery" and "ghostwriting" interchangeably [2].

High-end ghostwriters command mid-six-figure fees, with Prince Harry's ghostwriter J.R. Moehringer reportedly earning a $1 million advance [2]. Generative AI promises to democratize the service, becoming "the ghostwriter for the masses" [2]. Yet concerns about originality and authenticity persist whether the assistance comes from humans or machines.

What Readers and Writers Should Watch

The publishing industry must establish clear disclosure standards before reader trust collapses entirely. Most readers want transparency when AI has been used, and they quickly recognize the telltale patterns of large language models [4]. Writers face a stomach-turning new question: "Did you actually write this?" [4]. As AI-generated content proliferates, every author risks suspicion and every book review becomes subject to scrutiny. The Preston incident demonstrates that even established journalists at prestigious publications aren't immune to the temptation, or the consequences, of undisclosed AI use.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited