New York Times Fires Writer After AI Tool Plagiarizes Guardian Review


The New York Times severed ties with accomplished freelance writer Alex Preston after he used an AI tool to draft a book review that plagiarized content from The Guardian. A reader spotted the similarities, sparking an investigation that revealed Preston had failed to edit out AI-generated text lifted from another critic's work. The incident underscores mounting concerns about AI's presence in newsrooms and the increasingly blurred line between human oversight and technological assistance.

New York Times Drops Freelance Writer Over AI-Assisted Plagiarism

The New York Times terminated its relationship with freelance writer Alex Preston after he admitted to using an AI tool to draft a book review that contained plagiarized content from The Guardian [1]. The incident came to light when a vigilant reader noticed striking similarities between Preston's January 2025 review of "Watching Over Her" by Jean-Baptiste Andrea and a review of the same book by Christobel Kent published in The Guardian last August [1].

Source: Futurism

Following the alert, the New York Times launched an investigation, during which Preston confessed that he had used an AI tool to help produce the piece but had failed to catch sections pulled directly from Kent's work [1]. In a statement to The Guardian, to which he has previously contributed, Preston said he was "hugely embarrassed" and had "made a serious mistake" [2].

Deep Similarities Expose Human Plagiarism and Editing Failure

The plagiarism was evident in passages describing the book's characters. Kent's original Guardian review read: "But the novel is also rich in smaller characters, from the lazy Machiavellian Stefano to hardworking Vittorio, whose otherworldly twin brother Emmanuele is prone to speaking in tongues and dressing up in ragtag begged-and-borrowed uniforms" [1]. Preston's version closely mirrored this: "The novel is also rich in secondary characters, from the lazy, Machiavellian Stefano to Mimo's childhood friend and fellow craftsman Vittorio and Vittorio's otherworldly twin, Emanuele, who speaks in tongues and dresses in scavenged uniforms" [1].

Source: Fast Company

An NYT spokesperson emphasized that "reliance on AI and inclusion of use of unattributed work by another writer is a serious violation of The Times's integrity and fundamental journalistic standards" [1]. An editor's note dated March 30 now precedes the review online, and the investigation found no issues in Preston's previous pieces for the publication [1][2].

Growing Scrutiny of AI Content Across Journalism

What makes this case particularly perplexing is that Alex Preston is an accomplished author with six novels and extensive bylines in major publications, including the New York Times, The Guardian, and the Financial Times [1]. The incident illustrates how even seasoned professionals can stumble when using AI tools prone to hallucinating and cobbling together content without attribution [1].

This isn't an isolated case among journalism scandals. Last month, Ars Technica fired a senior tech reporter after he inadvertently included AI-fabricated quotes in an article, an error he attributed to asking an AI tool to generate notes [1]. Such incidents have fueled ambient paranoia about AI's presence in newsrooms, with particular focus on the New York Times [1].

Earlier this month, speculation swirled around a piece in the paper's "Modern Love" column that readers accused of sounding "EXACTLY like AI slop" [1]. Days later, The Atlantic published findings from a recent study using AI detection software showing that opinion sections at outlets like the New York Times and The Wall Street Journal were six times more likely to contain AI-generated prose than news articles [1]. The author of the questioned column admitted to using AI chatbots like ChatGPT as a "collaborative editor," seeking "inspiration and guidance and correction" [1].

The Real Culprit: Human Oversight, Not Technology

Some observers argue the focus should remain squarely on human responsibility rather than the technology itself. As one commentary noted, the crime was plagiarism, a uniquely human failing, not the use of AI [3]. The AI generated words from its training data without intent to plagiarize; it was Preston's job to edit those sections out, and he failed to do so [3].

Source: ET

For book critics and newsrooms alike, this incident raises urgent questions about ethical standards and the boundaries of acceptable AI assistance. As AI tools become more sophisticated and accessible, the line between legitimate editorial support and a violation of journalistic integrity grows increasingly blurred. The short-term impact is clear: heightened vigilance from editors and readers, with publications likely to implement stricter guidelines around AI disclosure. Long term, the industry faces a reckoning over what constitutes authentic criticism and whether attribution standards can keep pace with technology that seamlessly blends sources. Readers should watch how major publications update their policies on AI tool usage and whether similar cases emerge as scrutiny intensifies across the media landscape.

TheOutpost.ai