NYT Cuts Ties With Writer as AI-Assisted Plagiarism Scandal Exposes Human Responsibility

The New York Times severed ties with freelance writer Alex Preston after he used an AI tool to draft a book review that plagiarized content from The Guardian. While Preston admitted to the editing failure, the incident raises critical questions about human responsibility when using AI in journalism and whether the focus should be on the technology or the writer's oversight.

NYT Severs Ties Following Book Review Plagiarism Incident

The New York Times has cut ties with Alex Preston, an accomplished freelance writer, after discovering that a January book review he authored contained passages strikingly similar to work previously published in The Guardian [1]. The review of "Watching Over Her" by Jean-Baptiste Andrea bore remarkable similarities to a review by Christobel Kent published in The Guardian last August. A vigilant reader first alerted NYT to the issue, prompting an internal investigation that revealed Preston had used an AI tool to help draft the piece [1].

Preston, who has written extensively for major publications including the Financial Times and has six novels under his belt, admitted he "made a serious mistake" and was "hugely embarrassed" by the incident [1]. An NYT spokesperson emphasized that "reliance on AI and inclusion of unattributed work by another writer is a serious violation of The Times's integrity and fundamental journalistic standards" [1]. The similarities were substantial, with Preston's text mirroring Kent's description of the novel's characters almost verbatim.

The Core Issue: Human Responsibility Over AI Capabilities

While the scandal centers on AI in journalism, the real transgression was human plagiarism, not the technology itself [2]. The AI tool generated text from its training data without intent to plagiarize; it was Preston's responsibility to edit out the problematic sections, and he failed to do so [2]. This editing failure underscores a critical point: when writers outsource tasks to AI, they remain accountable for the final output. The incident illustrates how even seasoned writers can let their guard down when using technology prone to cobbling together other people's work without attribution [1].

According to the editor's note dated March 30, Preston claimed he didn't use AI in previous NYT reviews, and the paper's investigation "found no issues in those pieces" [1]. This suggests the problem wasn't systematic but rather a specific lapse in judgment and oversight.

Growing Ambient Paranoia in Newsrooms

This case adds to a growing list of scandals involving AI-generated content in journalism. Last month, Ars Technica fired a senior tech reporter after he accidentally included AI-fabricated quotes in an article, claiming the error arose after he asked an AI tool to generate notes [1]. Earlier this month, speculation swirled around a piece in NYT's "Modern Love" column that readers accused of sounding "EXACTLY like AI slop" [1].

The Atlantic recently published findings from a study using AI detection software that revealed opinion sections at outlets like NYT and The Wall Street Journal were six times more likely to contain AI-generated prose than news articles, suggesting all had likely published AI-written content at some point [1]. The author of the questioned Modern Love column admitted to using ChatGPT as a "collaborative editor" for "inspiration and guidance and correction" [1].

What This Means for Journalism's Future

The Preston incident matters because it exposes the tension between technological assistance and journalistic integrity. As AI tools become more sophisticated and accessible, newsrooms face the challenge of establishing clear guidelines about acceptable use while maintaining editorial standards. The question isn't whether AI should be banned from journalism, but how writers and editors can use it responsibly without compromising attribution and originality. Readers should watch for how major publications develop policies around AI assistance and whether transparency about AI use becomes standard practice in bylines and disclosures.
