3 Sources
[1]
NYT Cuts Ties With Writer as Scrutiny of AI Content Grows
A thudding prose style wasn't the giveaway this time, but accidentally cribbed work. The NYT was alerted to the issue by a reader who observed that a January review of "Watching Over Her" by Jean-Baptiste Andrea, written by author and journalist Alex Preston, bore remarkable similarities to a review of the same book by Christobel Kent that was published in The Guardian last August. After the NYT launched an investigation, Preston admitted he used an AI tool to help draft the review and failed to spot the sections that were pulled from The Guardian. In a statement to the British newspaper, which he has previously written for, Preston said he was "hugely embarrassed" and had "made a serious mistake." "Editors have appended a note to a book review written earlier this year by a freelance critic, who told The Times after publication that he had used an AI tool to assist him in producing the piece," an NYT spokesperson told The Wrap. "This tool produced similarities to a book review published in The Guardian, which our editors' note makes clear. For staff journalists and freelance writers alike, reliance on AI and inclusion of unattributed work by another writer is a serious violation of The Times's integrity and fundamental journalistic standards." The similarities are deep, and come from a passage describing the book's colorful dramatis personae. A portion of the original reads: "But the novel is also rich in smaller characters, from the lazy Machiavellian Stefano to hardworking Vittorio, whose otherworldly twin brother Emmanuele is prone to speaking in tongues and dressing up in ragtag begged-and-borrowed uniforms..." Preston's review: "The novel is also rich in secondary characters, from the lazy, Machiavellian Stefano to Mimo's childhood friend and fellow craftsman Vittorio and Vittorio's otherworldly twin, Emanuele, who speaks in tongues and dresses in scavenged uniforms" -- and so forth.
According to the editor's note, dated March 30, Preston claims he didn't use AI in previous reviews for the NYT, and that while conducting the investigation the paper "found no issues in those pieces." It's one of the more baffling cases of an AI contretemps. Preston is an accomplished author with six novels under his belt, and has written heaps for major publications like the NYT, The Guardian, and the Financial Times. But other AI journalism scandals further illustrate that even seasoned writers can be lulled into letting their guard down when using the tech, which is prone to hallucinating and cobbling together other people's work without attribution. Last month, Ars Technica fired a senior tech reporter after he accidentally included AI-fabricated quotes in an article, an error the reporter claims arose after he asked an AI tool to generate notes. Incidents like those have contributed to an ambient paranoia over how AI may be sneaking its way into even the most venerable journalistic institutions, much of which has centered recently on the NYT. Earlier this month, a flurry of speculation rekindled around a piece published in the paper's "Modern Love" column which readers accused of sounding "EXACTLY like AI slop." Days later, The Atlantic published a piece on a recent study using the latest AI detection software that found that the opinion sections of outlets like the NYT and The Wall Street Journal were six times more likely to contain AI-generated prose than their news articles, with the upshot that all had likely published AI-written content, unknowingly or otherwise, at some point. In The Atlantic piece, the author of the NYT column in question admitted to using AI chatbots like ChatGPT as a "collaborative editor" for seeking "inspiration and guidance and correction."
[2]
A New York Times critic used AI to write a review, but good criticism can't be outsourced
An author and freelance journalist has admitted to using AI to help him write a book review for The New York Times. Alex Preston's review of Jean-Baptiste Andrea's novel Watching Over Her, published by The New York Times in January 2026, draws phrases and full paragraphs from Christobel Kent's review in The Guardian. The "error" was brought to light by a reader, who alerted The New York Times to the similarities. Preston told The Guardian he is "hugely embarrassed" and "made a huge mistake." The Times promptly dropped Preston, calling his "reliance on A.I. and his use of unattributed work by another writer" a "clear violation of the Times's standards." An editor's note now precedes the review online, advising readers of the issue and providing a link to the Guardian review.
[3]
Plagiarist's the human, not AI!
A freelance writer for the NYT allegedly used AI for a book review. The real issue was plagiarism, not the AI's involvement. The writer failed to edit out AI-generated text. Humans outsource many tasks, but using AI for writing is now seen as scandalous. The focus should be on the human's plagiarism, not the AI's capabilities.

NYT reportedly dropped freelance writer Alex Preston for using AI to write a book review. Scandal! Outrage! Pitchforks! One imagines newsrooms are gasping as though someone's fessed up to 'poisoning' the water cooler with LSD. But let's be clear: the crime was not using AI. It was plagiarism. And plagiarism, dear humans, is a uniquely human sport - like billiards and carrom, where you try to cut corners, but with more shame.

The AI in question, if any journalist naming and shaming Preston - and, by extension, AI as a species - had bothered to ask it, stands proudly guilty of being... well, intelligent. It generated words, ideas, metaphors, metaphors about metaphors, from the diet it was fed. It did not intend to nick lines written earlier, that too by a human. It was the human's job to edit those bits out. And he bungled it.

If you hire a sous-chef to chop onions and he steals onions from the neighbour's pantry, do you blame the knife? No. You blame the sous-chef. Humans outsource everything these days - meals, groceries, driving, thinking.... But when the outsourcing is to an AI, suddenly, it's scandalous? That's all the hypocrisy that's fit to print.

So, let's please rewrite the headline: 'Human plagiarises review while AI provides perfectly good words', not 'Human used AI to write book review', as if AI is a version of Clyde to human's Bonnie.
The New York Times severed ties with accomplished freelance writer Alex Preston after he used an AI tool to draft a book review that plagiarized content from The Guardian. A reader spotted the similarities, sparking an investigation that revealed Preston failed to edit out AI-generated text lifted from another critic's work. The incident underscores mounting concerns about AI's presence in newsrooms and the blurred lines between human oversight and technological assistance.
The New York Times terminated its relationship with freelance writer Alex Preston after he admitted to using an AI tool to draft a book review that contained plagiarized content from The Guardian [1]. The incident came to light when a vigilant reader noticed striking similarities between Preston's January 2026 review of "Watching Over Her" by Jean-Baptiste Andrea and a review of the same book by Christobel Kent published in The Guardian last August [1].
Source: Futurism
Following the alert, the New York Times launched an investigation that led Preston to confess he had used an AI tool to assist in producing the piece but failed to catch sections pulled directly from Kent's work [1]. In a statement to The Guardian, where he has previously contributed, Preston said he was "hugely embarrassed" and had "made a serious mistake" [2].
The plagiarism was evident in passages describing the book's characters. Kent's original Guardian review read: "But the novel is also rich in smaller characters, from the lazy Machiavellian Stefano to hardworking Vittorio, whose otherworldly twin brother Emmanuele is prone to speaking in tongues and dressing up in ragtag begged-and-borrowed uniforms" [1]. Preston's version closely mirrored this: "The novel is also rich in secondary characters, from the lazy, Machiavellian Stefano to Mimo's childhood friend and fellow craftsman Vittorio and Vittorio's otherworldly twin, Emanuele, who speaks in tongues and dresses in scavenged uniforms" [1].
Source: Fast Company
An NYT spokesperson emphasized that "reliance on AI and inclusion of unattributed work by another writer is a serious violation of The Times's integrity and fundamental journalistic standards" [1]. An editor's note dated March 30 now precedes the review online, and the investigation found no issues in Preston's previous pieces for the publication [1][2].
What makes this case particularly perplexing is that Alex Preston is an accomplished author with six novels and extensive bylines in major publications including the New York Times, The Guardian, and the Financial Times [1]. The incident illustrates how even seasoned professionals can stumble when using AI tools prone to hallucinating and cobbling together content without attribution [1].
This isn't an isolated case among journalism scandals. Last month, Ars Technica fired a senior tech reporter after he accidentally included AI-fabricated quotes in an article, an error he attributed to asking an AI tool to generate notes [1]. Such incidents have fueled ambient paranoia about AI's presence in newsrooms, with particular focus on the New York Times [1].
Earlier this month, speculation swirled around a piece in the paper's "Modern Love" column that readers accused of sounding "EXACTLY like AI slop" [1]. Days later, The Atlantic published findings from a recent study using AI detection software showing that opinion sections at outlets like the New York Times and The Wall Street Journal were six times more likely to contain AI-generated prose than news articles [1]. The author of the questioned column admitted to using AI chatbots like ChatGPT as a "collaborative editor" for seeking "inspiration and guidance and correction" [1].
Some observers argue the focus should remain squarely on human responsibility rather than the technology itself. As one commentary noted, the crime was plagiarism, a uniquely human failing, not the use of AI [3]. The AI generated words from its training data without intent to plagiarize; it was Preston's job to edit those sections out, and he failed to do so [3].
Source: ET
For book critics and newsrooms alike, this incident raises urgent questions about ethical standards and the boundaries of acceptable AI assistance. As AI tools become more sophisticated and accessible, the line between legitimate editorial support and journalistic integrity violations grows increasingly blurred. The short-term impact is clear: heightened vigilance from editors and readers, with publications likely implementing stricter guidelines around AI disclosure. Long-term, the industry faces a reckoning about what constitutes authentic criticism and whether attribution standards can keep pace with technology that seamlessly blends sources. Readers should watch for how major publications update their policies on AI tool usage and whether similar cases emerge as scrutiny intensifies across the media landscape.
Summarized by Navi