2 Sources
[1]
NYT Cuts Ties With Writer as Scrutiny of AI Content Grows
This time the giveaway wasn't a thudding prose style but accidentally cribbed work. The NYT was alerted to the issue by a reader who observed that a January review of "Watching Over Her" by Jean-Baptiste Andrea, written by author and journalist Alex Preston, bore remarkable similarities to a review of the same book by Christobel Kent published in The Guardian last August. After the NYT launched an investigation, Preston admitted he had used an AI tool to help draft the review and failed to spot the sections pulled from The Guardian. In a statement to the British newspaper, which Preston has previously written for, he said he was "hugely embarrassed" and had "made a serious mistake."

"Editors have appended a note to a book review written earlier this year by a freelance critic, who told The Times after publication that he had used an AI tool to assist him in producing the piece," an NYT spokesperson told The Wrap. "This tool produced similarities to a book review published in The Guardian, which our editors' note makes clear. For staff journalists and freelance writers alike, reliance on AI and inclusion of unattributed work by another writer is a serious violation of The Times's integrity and fundamental journalistic standards."

The similarities are deep, and come from a passage describing the book's colorful dramatis personae. A portion of the original reads: "But the novel is also rich in smaller characters, from the lazy Machiavellian Stefano to hardworking Vittorio, whose otherworldly twin brother Emmanuele is prone to speaking in tongues and dressing up in ragtag begged-and-borrowed uniforms..." Preston's review: "The novel is also rich in secondary characters, from the lazy, Machiavellian Stefano to Mimo's childhood friend and fellow craftsman Vittorio and Vittorio's otherworldly twin, Emanuele, who speaks in tongues and dresses in scavenged uniforms" -- and so forth.
According to the editor's note, dated March 30, Preston claims he didn't use AI in previous reviews for the NYT, and while conducting its investigation the paper "found no issues in those pieces." It's one of the more baffling cases of an AI contretemps: Preston is an accomplished author with six novels under his belt who has written heaps for major publications like the NYT, The Guardian, and the Financial Times. But other AI journalism scandals further illustrate that even seasoned writers can be lulled into letting their guard down when using the tech, which is prone to hallucinating and to cobbling together other people's work without attribution. Last month, Ars Technica fired a senior tech reporter after he accidentally included AI-fabricated quotes in an article, an error the reporter claims arose after he asked an AI tool to generate notes.

Incidents like those have contributed to an ambient paranoia over how AI may be sneaking its way into even the most venerable journalistic institutions, much of which has centered recently on the NYT. Earlier this month, a flurry of speculation rekindled around a piece published in the paper's "Modern Love" column, which readers accused of sounding "EXACTLY like AI slop." Days later, The Atlantic published a piece on a recent study using the latest AI detection software, which found that the opinion sections of outlets like the NYT and The Wall Street Journal were six times more likely to contain AI-generated prose than their news articles, with the upshot that all had likely published AI-written content, unknowingly or otherwise, at some point. In The Atlantic piece, the author of the NYT column in question admitted to using AI chatbots like ChatGPT as a "collaborative editor" for "inspiration and guidance and correction."
[2]
Plagiarist's the human, not AI!
A freelance writer for the NYT allegedly used AI for a book review. The real issue was plagiarism, not the AI's involvement: the writer failed to edit out AI-generated text. Humans outsource many tasks, but using AI for writing is now seen as scandalous. The focus should be on the human's plagiarism, not the AI's capabilities.

NYT reportedly dropped freelance writer Alex Preston for using AI to write a book review. Scandal! Outrage! Pitchforks! One imagines newsrooms gasping as though someone's fessed up to 'poisoning' the water cooler with LSD. But let's be clear: the crime was not using AI. It was plagiarism. And plagiarism, dear humans, is a uniquely human sport - like billiards and carrom, where you try to cut corners, but with more shame.

The AI in question, if any journalist naming and shaming Preston - and, by extension, AI as a species - had bothered to ask it, stands proudly guilty of being... well, intelligent. It generated words, ideas, metaphors, metaphors about metaphors, from the diet it was fed. It did not intend to nick lines written earlier, that too by a human. It was the human's job to edit those bits out. And he bungled it. If you hire a sous-chef to chop onions and he steals onions from the neighbour's pantry, do you blame the knife? No. You blame the sous-chef.

Humans outsource everything these days - meals, groceries, driving, thinking.... But when the outsourcing is to an AI, suddenly it's scandalous? That's all the hypocrisy that's fit to print. So, let's please rewrite the headline: 'Human plagiarises review while AI provides perfectly good words', not 'Human used AI to write book review', as if AI is a version of Clyde to the human's Bonnie.
The New York Times severed ties with freelance writer Alex Preston after he used an AI tool to draft a book review that plagiarized content from The Guardian. While Preston admitted to the editing failure, the incident raises critical questions about human responsibility when using AI in journalism and whether the focus should be on the technology or the writer's oversight.

The New York Times has cut ties with Alex Preston, an accomplished freelance writer, after discovering that a January book review he authored contained passages strikingly similar to work previously published in The Guardian [1]. The review of "Watching Over Her" by Jean-Baptiste Andrea bore remarkable similarities to a review by Christobel Kent published in The Guardian last August. A vigilant reader first alerted the NYT to the issue, prompting an internal investigation that revealed Preston had used an AI tool to help draft the piece [1].

Preston, who has written extensively for major publications including the Financial Times and has six novels under his belt, admitted he "made a serious mistake" and was "hugely embarrassed" by the incident [1]. An NYT spokesperson emphasized that "reliance on AI and inclusion of unattributed work by another writer is a serious violation of The Times's integrity and fundamental journalistic standards" [1]. The similarities were substantial, with Preston's text mirroring Kent's description of the novel's characters almost verbatim.

While the scandal centers on AI in journalism, the real transgression was human plagiarism, not the technology itself [2]. The AI tool generated text from its training data without intent to plagiarize; it was Preston's responsibility to edit out the problematic sections, and he failed to do so [2]. This editing failure underscores a critical point: when writers outsource tasks to AI, they remain accountable for the final output. The incident illustrates how even seasoned writers can be lulled into letting their guard down when using technology prone to cobbling together other people's work without attribution [1].

According to the editor's note dated March 30, Preston claimed he didn't use AI in previous NYT reviews, and the paper's investigation "found no issues in those pieces" [1]. This suggests the problem wasn't systemic but rather a specific lapse in judgment and oversight.

This case adds to a mounting list of scandals involving AI-generated content in journalism. Last month, Ars Technica fired a senior tech reporter after he accidentally included AI-fabricated quotes in an article; the reporter claims the error arose after he asked an AI tool to generate notes [1]. Earlier this month, speculation swirled around a piece in the NYT's "Modern Love" column that readers accused of sounding "EXACTLY like AI slop" [1].

The Atlantic recently published findings from a study using AI detection software which found that the opinion sections of outlets like the NYT and The Wall Street Journal were six times more likely to contain AI-generated prose than their news articles, suggesting all had likely published AI-written content at some point [1]. The author of the questioned Modern Love column admitted to using ChatGPT as a "collaborative editor" for "inspiration and guidance and correction" [1].
The Preston incident matters because it exposes the tension between technological assistance and journalistic integrity. As AI tools become more sophisticated and accessible, newsrooms face the challenge of establishing clear guidelines about acceptable use while maintaining editorial standards. The question isn't whether AI should be banned from journalism, but how writers and editors can use it responsibly without compromising attribution and originality. Readers should watch for how major publications develop policies around AI assistance and whether transparency about AI use becomes standard practice in bylines and disclosures.
Summarized by Navi · 25 Mar 2026 · Entertainment and Society