9 Sources
[1]
A New York Times critic used AI to write his review - but criticism is deeply human
An author and freelance journalist has admitted to using AI to help him write a book review for the New York Times. Alex Preston's review of Jean-Baptiste Andrea's novel Watching Over Her, published in the New York Times in January 2026, draws phrases and full paragraphs from Christobel Kent's Guardian review. The "error" was brought to light by a reader, who alerted the New York Times to the similarities.

Preston told the Guardian he is "hugely embarrassed" and "made a serious mistake". The Times promptly dropped Preston, calling his "reliance on A.I. and his use of unattributed work by another writer" a "clear violation of the Times's standards". An editor's note now precedes the review online, advising readers of the issue and providing a link to the Guardian review.

Preston's apology to the Guardian raises more questions than it resolves. The portion quoted online seems to speak more to the issue of unattributed work than his use of AI. It reads: "I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in." This implies that if he had removed the "overlapping" language, the issue would have been avoided.

As a literary critic and scholar, I believe the deeper question isn't whether or not critics should do more to hide their use of AI - but the ethics of using it at all.

Why AI can't do criticism

The role of the critic isn't to summarise or repackage art, but to actively participate in a conversation about it. "Good criticism thrives in the complexity of its environment," writes critic Jane Howard, who is also The Conversation's Arts + Culture editor. "Each review sits in conversation with every other review of a piece of art, with every other review the critic has written."

In other words, the critic is in conversation with both the artist and the audience. The critic's emotional and intellectual engagement with art - and their translation and communication of meaning - is intrinsic to their role as mediator. That role is deeply human. Perhaps information can be outsourced, but emotional engagement can't. Nor can an individual perspective, filtered through one human's reading, viewing, listening and experiences.

Art and AI controversies

There are valid arguments outlining the functional uses of AI, and warning against significant climate repercussions. But there is also an escalating concern around the intrusion of AI into creative expression.

Last month, author Mia Ballard was accused of using AI to write her horror novel, Shy Girl. It was withdrawn from publication in the UK and cancelled from scheduled publication in the US, after "readers on platforms such as Goodreads and Reddit had questioned whether sections of the text bore hallmarks of AI-generated prose", according to the Guardian.

In 2023, German artist Boris Eldagsen sparked controversy when he revealed that his prize-winning photograph The Electrician was AI generated. In 2025, Tilly Norwood, the first fully AI-generated "actress", ignited debate around whether so-called synthetic actors were a tool for creative expression, or a threat to human creators. The same year, writers were "horrified" to discover that their work had been pirated by Meta to train AI systems.

If the question that underlies these examples is "what is the role of art?", this latest debacle adds "and what is the responsibility of the critic?"

Breaking a pact

Art criticism in Australia is what Howard describes as a "niche within a niche".
The sector is unbearably small, so most critics have an additional day job and are in close professional and personal proximity to the artists whose work they review. Some critics of the critics, such as writer Gideon Haigh, have suggested this has led to a culture of what literary academic Emmett Stinson called "too-nice" criticism.

But I would argue generosity is fundamental to public-facing criticism - and that the critic reviewing in the public sphere has a responsibility to writers and readers. The writer might safely assume that when we publish a review that weighs their book's successes and failings against its ambition, we have, at the very least, taken the time to read and carefully consider their work, and our own response to it.

This unspoken pact is broken when the critic begins to use AI - particularly when a professional reviewer like Preston seems to outsource his assessment to it. Such fiascos point to a disturbing future where readers' opportunities to build community and develop empathy through engagement with literature are outsourced entirely to AI.

Australian literature academic Julieanne Lamond has said "when we write reviews we have to do it 'naked' - as individual readers, with a public to judge our judgements". In other words, we sit at the middle of a pact between the writer of a book and their potential readers.

Criticism can be literature

Done well, criticism is literature. As Australian author, playwright and critic Leslie Rees argued in 1946, good literary criticism is a "real and creative service to literature". Popular criticism, written for the general public and published as journalism, might sit on a different playing field from scholarly criticism. But its obligation to readers - to convey real and honest opinions about books and bring readers into a conversation about literature - is no less significant. There is a shared obligation to be honest, and surely this honesty extends to transparency about AI use.

French professor and essayist Philippe Lejeune, best known for his work on autobiography, used the term the "autobiographical pact" to describe the relationship between the writer of a memoir and the reader. That is, the reader accepts what the memoirist says as truth, based on the writer's acknowledgements of their own biases and subjectivity. We might transfer a similar pact to the reviewer and their reader. Should the reader not be able to trust that the review they're reading is the critic's own?

Hannah Bowman, a literary agent from Liza Dawson Associates, recently described mistrust as the book industry's greatest peril: "it's essential for all parties in the publishing process to have transparency and clarity in conversations about how AI tools are being used by any party, especially in the creative process".

In failing to disclose his use of AI, Preston has not only embarrassed himself, but broken the trust of his readers.
[2]
If using ChatGPT is cheating, what about ghostwriting? The old debate behind a new panic
In February 2023, a little more than a year after the launch of ChatGPT, Vanderbilt University sent an email to its student body in the wake of a fatal campus shooting at Michigan State. "The recent Michigan shootings are a tragic reminder of the importance of taking care of each other," the email read in part. In tiny type at the bottom of the message, a disclaimer appeared: "paraphrased from OpenAI's ChatGPT."

Students immediately objected. "There is a sick and twisted irony to making a computer write your message about community and togetherness because you can't be bothered to reflect on it yourself," one senior wrote. A Vanderbilt apology email quickly followed. The university launched a professionalism and ethics investigation. One associate dean couched the misstep as a result of learning pains tied to the adoption of new technology.

Chatbots have spawned a host of ethical questions about writing assistance for teachers, students and authors. But similar debates about ghostwriting have been taking place for over a century, revealing a persistent discomfort with the idea that the words we read might not belong to the person whose name is attached to them.

Outsourcing authorship

Ghostwriting, a paid arrangement in which one person writes under another's name, has existed for over a century. The term seems to have first appeared in the English language in a 1908 newspaper article, which I encountered while researching my forthcoming book, "Ghostwriting: A Secret History, from God to A.I." The story appeared in the Daily Star, in Lincoln, Nebraska, and describes an anonymous writer who earned US$5,000 to help a high-society woman write a book.

Today, ghostwriting usually involves collaborations between professional writers and celebrities or professionals who otherwise wouldn't have the time, skill or connections to write a book. On publication of the manuscript, the ghostwriter is typically named, albeit obliquely - perhaps identified as a friend or consultant in the acknowledgments section. In some instances, the ghostwriter's name appears alongside the credited author's on the cover. Either way, the client assumes ownership of the ghostwriter's work.

An ethical gray area

And yet when I type "the practice of one person writing in another person's name" into Google, the search engine doesn't spit out "ghostwriting." My first hit is "pseudonym" or "alias." "Plagiarism," "libel" and "slander" aren't far behind. A 1953 article titled "Ghost Writing and History" that appeared in The American Scholar also points out that in the mid-20th century, "forgery" - falsely imitating another's work with the intent to deceive - and "ghostwriting" could be used interchangeably by scholars.

In other words, even when consensual and compensated, ghostwriting has some relatives that are ethically suspect. And maybe that's why many clients obscure the fact that they've used a ghostwriter, and why responses to ghostwritten works often reflect uneasiness with the practice.

"You should be ashamed," read one social media post, written in response to Millie Bobby Brown's 2023 debut novel, which she co-wrote with a ghostwriter. "[The ghostwriter's] name should be on the cover. She was the one who actually wrote the book." The discomfort goes both ways: "I feel so guilty and ashamed whenever I use a ghostwriter now because I feel people will think I'm lying," an anonymous poster on Reddit admitted.
Both the criticism and self-flagellation imply that the act of claiming another person's words can render those words deceitful, even if the words have been paid for and the content is true. Ghostwriting agencies rush to defuse these worries. Ghostwriting has been around forever, the Association of Ghostwriters reassures its clients. Ghostwriting is consensual and collaborative - not lazy, deceptive or a form of "selling out," an author who'd recently used ghostwriting services explained.

And yet, in the last chapter of her ghostwritten book, Whoopi Goldberg acknowledges some misgivings about using a ghostwriter. "I meant to try (to write the book myself)," Goldberg writes. "And when it turned out I couldn't quite pull it off ... I looked for help." Goldberg frames the assistance of ghostwriting as something she deserved after overcoming obstacles as a Black woman. But Goldberg also has financial resources available that others looking for writing assistance usually don't. High-end ghostwriters collect in the mid-six figures for their services; Prince Harry's ghostwriter, J.R. Moehringer, supposedly scored a $1 million advance.

Cue chatbots. Generative AI promises to be the ghostwriter for the masses, so much so that ghostwriter Josh Lisec explained to me how, in the future, ghostwriting will need to be marketed as a boutique service for elites if it is to survive.

Naming names

Whether you're paying for a ghostwriter or using a free chatbot, "assistance" or "collaboration" on intellectual and artistic work is not automatically unethical. Editors have long made a career out of helping authors shape their writing. Visual artists have long employed studio assistants. Television shows are written collaboratively in writers' rooms.

And yet, accepting assistance on intellectual or artistic work can raise legitimate questions, particularly with regard to how that assistance is acknowledged and how much assistance can be accepted while still calling a project "ours." In the late 19th century, for example, one sculptor went to court to rebut a claim that his assistant - whom the press referred to as a "ghost" - had completed sculptures for which the sculptor took credit. The judge announced that an artist could accept, with integrity, a certain amount of mechanical assistance. But he added that there was a threshold at which artistic assistance became "dishonest." The judge made the accused sculptor craft a bust in real time to prove his skill.

Similarly, most educators find it more ethical when their students turn to ChatGPT for editing assistance but much less so when they use it to generate a document from scratch. Many universities now allow AI as a tool but require users to verify its accuracy and disclose its use - rules that echo long-standing ghostwriting contracts. Yet even verified, A.I.-generated text, if claimed solely as an individual's work, can violate policy at my institution, the University of Southern California: "You should never attempt to present ... content created by others, including generative AI, as your own."

The same policies that govern appropriate A.I. use also come up in ghostwriting contracts. The ghostwriter signs a "warranty of originality" that promises the author that the ghostwriter has - via platforms such as iThenticate - fact-checked and plagiarism-checked their work. When inaccuracies do crop up, ghostwriters often take the fall.
Former Department of Homeland Security Secretary Kristi Noem blamed her ghostwriter for indicating in her memoir that she had met North Korean dictator Kim Jong Un. Physician David Agus, who teaches at the University of Southern California Keck School of Medicine, held his ghostwriter responsible for the many instances of plagiarism that were identified in his popular science books.

Ghostwriters willingly provide assistance and accept responsibility for the originality of what they write. Scholars have permission to use generative AI, provided they properly cite its use. And yet when Vanderbilt administrators advertised that their email had been written with the assistance of ChatGPT, students and faculty pushed back.

University policies and book contracts may offer veils of legitimacy and shields from legal liability. But in the end, readers still seem to want the words they're reading to come from the mind of the person whose name is on the byline.
[3]
How AI Is Creeping Into The New York Times
Artificial intelligence seems to be turning up, undisclosed, in the opinion pages of major news publications.

On Sunday, a writer named Becky Tuch posted an excerpt on X from a months-old New York Times "Modern Love" column that had given her pause. "I don't want to falsely accuse writers" of using AI, she wrote. "But this reads EXACTLY like AI slop." The excerpt -- from an essay by a mother who had lost custody of her son -- described the son's feelings, at one point, toward his mother: "Not hate. Not anger. Just the flat finality of a heart too tired to keep trying."

Among the 100-plus replies to Tuch's post was one by an AI researcher, Tuhin Chakrabarty. He'd run the snippet from "Modern Love" through an AI-detection tool from the start-up Pangram Labs, which flagged it as likely having been AI-generated. I learned about the incident from Chakrabarty, a computer-science professor at Stony Brook University. I'd previously written about his efforts to quantify the proliferation of AI in novels self-published on Amazon. After commenting on Tuch's post, he plugged the whole column into the Pangram AI detector. The program estimated that more than 60 percent of it was AI-generated. I ran the column through four other AI-detection tools: Two of them flagged 30 percent of the work as likely AI-generated, one found no AI, and one suspected AI but offered no percentage.

Kate Gilgan, the author of the column, told me that she hadn't copied and pasted language from an AI model into her work. "However, I did utilize AI as a tool," she added, seeking "inspiration and guidance and correction." She said she'd prompted various products (including ChatGPT, Claude, Copilot, Gemini, and Perplexity) to help her stay on topic in a paragraph, for example, or stick to a theme. "I used AI as a collaborative editor and not as a content generator," she said.

In response to questions about the column, a New York Times spokesperson noted that the paper's contracts require freelancers to abide by its ethical-journalism handbook, which mandates that AI use "adhere to established journalistic standards and editing processes" and that "substantial use of generative A.I." be clearly disclosed to readers. Asked whether Gilgan's AI use rose to the level requiring disclosure, the spokesperson said in an email: "Journalism at The Times is inherently a human endeavor. That will not change. As technology evolves, we are consistently assessing best practices for our newsroom."

Whatever the extent of Gilgan's dependence on AI -- detection tools are not very reliable -- her acknowledgment is the latest evidence of a phenomenon that people have been whispering about online for a long time: Artificial intelligence has already infiltrated prestigious media outlets and publishing houses.

Last week, Hachette made national headlines when it decided to cancel the publication of a novel, Shy Girl, that appeared to include AI-generated text, which readers had identified ahead of its American release. (The novel had previously been published in the United Kingdom and is now being discontinued there. The author told the Times that she had not used AI to write Shy Girl, but that an acquaintance who'd edited an earlier version of the novel had done so.) Last spring, the Chicago Sun-Times and The Philadelphia Inquirer were caught publishing a syndicated summer-reading guide featuring nonexistent novels; a freelancer had made it using ChatGPT.
Besides those high-profile incidents, people have been posting for months about suspicions of AI turning up, undisclosed, in major news publications -- far beyond personal essays or puffy summer features.

A note of caution: One challenge with AI detection is that the tools involved, much like the models they analyze, are still evolving. Sometimes they flag false positives or fail to catch AI-generated material. Pangram's CEO, Max Spero, acknowledged that both happen. He also warned that the percentage of AI material in a text is difficult to estimate with certainty; an article riddled with AI tells could be flagged as fully AI-generated even if it also includes some human-written text. Different detection tools give varying results.

Jenna Russell, a doctoral candidate in computer science at the University of Maryland, has been following various social-media firestorms. Often, someone will paste a screenshot from a work that they suspect contains AI material, a commenter will run it through an AI detector and post the results, others will pile on to express outrage, and then everyone will just move on. Wondering how common AI use really was, Russell and six other researchers set Pangram on thousands of articles, and found that it flagged likely AI use across the U.S. press -- including in the opinion sections of The New York Times, The Wall Street Journal, and The Washington Post -- suggesting that writers are turning to AI more than their readers might believe. (Although the researchers focused on opinion articles in the big publications, they also studied a small number of their news stories; among those, far fewer were flagged for AI-like language.) In October, Russell and her colleagues published a preprint of their research, which is not yet peer-reviewed; several Pangram researchers, including Spero, are co-authors.

All three of those national newspapers have posted information about their AI policies, noting that they permit some use but prioritize being transparent about it. A spokesperson for the Journal's parent company, Dow Jones, declined to comment for this article. (I'm a former Journal reporter and have also written and edited for the Times on a freelance basis.) In response to questions about its stories, a spokesperson for the Post said, "Our editing process includes working to establish the authenticity of everything we publish." (The Post also creates AI-generated podcasts, so it isn't entirely clear what its definition of authenticity is.)

The Post had tested three articles I asked about and told me that it had found lower AI likelihood through Pangram than the researchers did; one ranked as "fully human written." Other detection tools suspected even less AI use in most cases. Spero told me that the current iteration of Pangram, which the Post used, was designed to be more conservative than the previous version (used in the research) in flagging material as AI-generated, partly for fear of spreading false accusations. But he also said that when he and Russell reran their data set of opinion articles through the current version, the underlying assessments were similar to those in the earlier iteration, including with regard to the Post. (Chakrabarty checked the "Modern Love" column with the current version of Pangram.)
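(For readers wondering what such detectors measure: one common, admittedly crude family of techniques scores how statistically predictable a text is to a language model, on the theory that machine-generated prose tends to be "smoother" than human prose. The sketch below is a minimal illustration of that idea using the open-source GPT-2 model via the Hugging Face transformers library; it is not how Pangram or any commercial detector actually works, and its thresholds are unreliable for exactly the reasons Spero describes.)

# Toy perplexity-based AI-text heuristic. Illustrative only: commercial
# detectors such as Pangram use proprietary trained classifiers, not this.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score how "surprised" GPT-2 is by the text; lower = more predictable.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Very low perplexity is weak evidence of machine generation; polished
# human prose also scores low, which is one source of false positives.
print(perplexity("Not hate. Not anger. Just the flat finality of a "
                 "heart too tired to keep trying."))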
Regardless of the exact numbers, the fact remains: Some of the most trusted publications in the United States have been publishing opinions -- under real people's names -- that appear to include text generated with AI models. As AI slop has become a fixture of all kinds of online spaces -- our internet searches, our social-media feeds, our online bookstores -- major newspapers have been seen by many as a protected space, in which AI-generated content would rarely (or never) appear undisclosed. The newspapers that have survived the onslaught of the internet have benefited from the shared assumption that they can be trusted. The stakes of a broken social contract could not be higher, and they go far beyond the risk of a smooth-brained writing style.

When opinion articles or personal essays are published in major papers -- sometimes with big names attached to them -- they can influence societal beliefs and, in turn, the policies of governments or corporations. It has seemed fair to assume, historically, that those opinions reflect the voices and beliefs of the individuals whose names are attached to them. But AI language is something else entirely. Research has found that AI output is much more homogenous than human language. Major AI companies have also acknowledged that their models can be skewed -- for example, toward certain cultural and political beliefs. Analyses of the Grok chatbot have found that its language often mimics that of the man behind its development, Elon Musk. Multiple studies, including those from AI companies themselves, also demonstrate that AI output is unusually persuasive, to the point of getting people to change their minds about political issues or candidates.

A world where some self-published romance novels include synthetic turns of phrase and plot points is upsetting. One where AI models' language and perspectives creep, undisclosed, into the pages of major newspapers -- and therefore into public life -- is terrifying.

The good news is that we can do something about this. Publications can design clear policies about AI use and disclosure and require that staffers and freelancers abide by them, including by explicitly listing the requirements in contracts. This isn't a stretch: Many contracts require, for example, that contributors promise not to plagiarize. (The Atlantic requires contributors to attest to being "the sole author" of their article, and forbids AI-generated writing or imagery without approval and disclosure.) In addition, editors could receive training in identifying AI tells by sight; they could also use detection products. Then they could follow up with writers whose work raises questions (while avoiding jumping to conclusions based only on an editor's suspicions or a software scan). Those who violate a publication's policies could face legal or other penalties; as with plagiarizing, using AI without disclosing it would incur significant social and professional costs.

Governments, too, could enact policies to rein in failures of disclosure: Legislators could legally require it in certain contexts, for example, though enforcement would surely raise free-speech challenges. Another remedy could be for major AI companies to take some responsibility for the problem by "watermarking" their products' output, making it easier to spot.
The Journal reported in 2024 that OpenAI had built a tool that could detect AI text with up to 99.9% certainty, but hadn't released it; one apparent factor, according to the Journal, was a survey in which some users "said they would use ChatGPT less if it deployed watermarks and a rival didn't." Asked for comment, an OpenAI spokesperson shared a blog post pointing out other obstacles; "bad actors" could circumvent it, for example. When I asked Chakrabarty about watermarking, he noted the technical difficulties but also raised a more existential question: "Why would Anthropic or OpenAI do it, when the whole business model is based on convincing people AI language is humanlike?"
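(Watermarking, as described in public research such as Kirchenbauer et al.'s 2023 work -- not necessarily OpenAI's unreleased tool, whose design is not public -- typically biases generation toward a pseudorandom "green list" of tokens, then tests whether a suspect text over-uses that list. Below is a minimal sketch of the detection side only, with an ordinary hash standing in for the keyed function a real system would use.)

# Toy green-list watermark detector, loosely after Kirchenbauer et al.
# (2023). Illustrative only: it assumes the generator already biased its
# sampling toward "green" tokens, and uses SHA-256 in place of a keyed PRF.
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" per step

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign each (context, token) pair to the green list.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    # Under the null (unwatermarked text), green hits ~ Binomial(n, 0.5);
    # a large positive z-score suggests watermarked output.
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    mean = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - mean) / std

print(watermark_z_score("the quick brown fox jumps over the lazy dog".split()))

The circumvention worry is visible even in this toy: paraphrasing a few tokens scrambles the (context, token) pairs and erases the statistical signal.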
[4]
Opinion | What the 'Shy Girl' Mess Says About the Future of Fiction
When readers ask questions about my thriller novels, I love to discuss the themes and characters in them and the inspiration for my writing. But as generative artificial intelligence worms its way through the publishing industry, I'm bracing for a stomach-turning query: Did you actually write this?

The worry has been at the front of my mind since last week, when Hachette canceled the forthcoming U.S. publication of the horror novel "Shy Girl" after readers and journalists flagged prose that sounded like A.I. slop. (The author maintains that a freelance editor is to blame for any prose penned by a large language model.)

Though I'm against the use of generative A.I. in creative writing, not everyone feels the same way. What does seem clear, however, is that most readers want disclosure when A.I. has been used, and they are quick to note the telltale rhythms and patterns of popular large language models. But as A.I. models continue to improve, I'm concerned that it will become difficult to distinguish between something written by a human and something written by a bot. As more A.I.-generated writing is put out in the world, more readers will question whether the text they are poring over was penned by a human. We're barreling toward a rapid erosion of trust between authors and readers, and the publishing industry is unprepared to deal with the consequences.

Already, with a little fine-tuning, chatbots can be eerily good mimics of published writers, nailing their word choices and go-to grammatical patterns. James Frey, an author who's no stranger to controversy and who has proudly admitted to using A.I. to write, has noted: "I have asked the A.I. to mimic my writing style so you, the reader, will not be able to tell what was written by me and what was generated by the A.I." Shortly after ChatGPT was publicly released, I entered the prompt "write a short story in the style of author Andrea Bartz." The output was an uncanny facsimile of my prose -- the actual scenes it generated made little sense, but the rhythm and sentences themselves mimicked some of the deliberate stylistic choices I make in my books.
[5]
The New York Times drops freelance journalist who used AI to write book review
Writer and author Alex Preston said he "made a serious mistake" after a reader spotted similarities between his review and one that appeared in the Guardian.

The New York Times has cut ties with a freelance journalist after discovering he used artificial intelligence to help write a book review that echoed elements of a review of the same book in the Guardian. It came after a New York Times reader flagged similarities between the paper's January review of Watching Over Her by Jean-Baptiste Andrea, written by author and journalist Alex Preston, and an August review of the same book written by Christobel Kent in the Guardian.

The New York Times launched an investigation, during which Preston admitted that he had used AI to assist in writing the review and did not spot the sections that were pulled from the Guardian before submitting it. In a statement to the Guardian on Tuesday, Preston said that he was "hugely embarrassed" and had "made a serious mistake".

The New York Times alerted the Guardian to the overlap in an email sent on Monday, and added an editor's note to the review acknowledging the use of AI and linking to the Guardian piece. "A reader recently alerted the Times that this review included language and details similar to those in a review of the same book published in the Guardian," reads the editor's note. "We spoke to the author of this piece, a freelance reviewer, who told us he used an AI tool that incorporated material from the Guardian review into his draft, which he failed to identify and remove. His reliance on AI and his use of unattributed work by another writer are a clear violation of the Times's standards."

Language that appears to be lifted from the Guardian review includes descriptions of characters - "lazy Machiavellian Stefano" appears as "lazy, Machiavellian Stefano" in the New York Times version - and the concluding assessment of the novel. The Guardian review states that the book is "most significantly a song of love to a country of contradictions, battered, war-torn, divided, misguided and miraculous: an Italy where life is costume and the performance of art, and where circuses spring up on wasteland"; the New York Times version says the characters "populate what is ultimately a love song to a country of contradictions: battered, divided, misguided and miraculous. This is an Italy where life is performance, where circuses rise on wasteland."

A spokesperson for the New York Times told the Guardian that Preston would no longer write for the paper. Preston wrote six reviews for the paper between 2021 and 2026, but told the New York Times he had not used AI to aid any of his other articles.

"I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in," Preston said in his statement to the Guardian. "I am hugely embarrassed by what happened and truly sorry. I took responsibility immediately and apologised to the New York Times, and I also want to apologise to Christobel Kent and to the Guardian."

Preston has written extensively for the Observer and the FT, as well as contributing to the Guardian and the Economist. He is the author of six books, the most recent of which, A Stranger in Corfu, was published in February, and is also the head of advisory at investment management firm Man Group. Earlier this year, he wrote a piece for the Man Group site titled The AI Bubble: Hidden Risks and Opportunities.
[6]
What if I told you the 'AI slop' debate was over 100 years old? It used to be about 'ghostwriting' | Fortune
[7]
New York Times Accused of Running AI-Generated Article
The New York Times faced scrutiny online this week after netizens speculated that a personal essay featured in its storied "Modern Love" column was generated using AI and published without disclosure. Nothing is proven; the AI allegations remain exactly that. The AI paranoia among readers, though, is very real.

The controversy kicked off over the weekend, when Becky Tuch of Lit Mag News took to X-formerly-Twitter to raise concerns about a "Modern Love" essay published by the newspaper in November 2025, titled "I Was Deemed Unfit to Be a Mother." The essay, written by a Canadian writer named Kate Gilgan, describes the author's experience of losing custody of her son due to her alcoholism.

"I don't want to falsely accuse writers of AI-use. But this reads EXACTLY like AI slop," Tuch wrote in a Sunday post. "And this is the frickin [New York Times] Modern Love column, which is notoriously competitive, super hard to break into. Just sad."

In her post, Tuch shared a screenshot of a section of Gilgan's piece, which read: "Not hate. Not anger. Just the flat finality of a heart too tired to keep trying. That's when I stopped fighting. I didn't give up. I shifted. I stopped thinking love was something I had to prove with court documents and supervised visits and legal bills. I stopped chasing every possible way to make him see I had changed. I started focusing on actually changing."

It's true that the text includes sentence structures commonly associated with AI-generated text. A guide issued last year by Wikipedia editors, for example, called out how much chatbots seem to love parallelisms -- a technique Gilgan employs in the first few sentences of the excerpt, framing her experience in a "not X, not Y, but Z" format. Large language models have also been observed to rely heavily on the "rule of three," a well-known rhetorical tool; Gilgan's essay features plenty of rule-of-three-style text, both in the excerpt flagged by Tuch and throughout the piece.

People quickly piled onto Tuch's post. Some agreed with her, proclaiming that the text appeared to be pure AI slop. Others said that, to them, the piece just read like regular "Modern Love" material. "There's been one lone guy editing [Modern Love] for about two decades and this is what he sounds like. It's how he edits. I've been edited by him and I recognize the style," commented the writer Ann Bauer. "This def could be AI! Not saying it isn't. But to me, it just sounds like a Modern Love."

Others made a different point entirely: that making allegations about AI use based on writing style alone is a dangerously slippery slope. "I think accusing writers of AI use without evidence is a pretty bad road to go down," responded Public Books editor Dennis Hogan, "all things considered."

We reached out to both Gilgan and the NYT but didn't hear back. "Modern Love" has no standalone AI policy; however, the NYT's broader AI policy promises transparency about the use of the tech. Again, though: all of this is conjecture, based wholly on the writing itself. (Some folks shared AI detection tools flagging the writing as likely AI-generated, but these programs should always be taken with a heavy serving of salt.)

It's worth noting that the large language models (LLMs) powering chatbots didn't actually invent "not X, but Y" parallelisms, nor the rule of three.
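(The stylistic tells described above are concrete enough to count mechanically. Below is a toy sketch of that idea -- our own illustration, not any real detector's method, and emphatically not proof of anything, since humans invented and still use these devices.)

# Toy counter for two stylistic "tells" mentioned in the piece:
# "not X. not Y. just Z" parallelism and em-dash density.
# A high count is a prompt for human scrutiny, never a verdict.
import re

def tell_counts(text: str) -> dict:
    words = max(1, len(text.split()))
    return {
        "not_not_just": len(re.findall(
            r"\bnot\s+\w+[.,;]\s+not\s+\w+[.,;]\s+just\b", text, re.IGNORECASE)),
        "em_dashes_per_1k_words": 1000 * text.count("\u2014") / words,
    }

print(tell_counts("Not hate. Not anger. Just the flat finality of a "
                  "heart too tired to keep trying."))
# -> {'not_not_just': 1, 'em_dashes_per_1k_words': 0.0}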
Nor did LLMs invent em-dashes, which many netizens have come to take as another telltale sign of AI writing -- a phenomenon that's frustrated many writers who don't want to give their beloved em-dashes up, even as AI-generated marketing copy and self-published books guzzle up and zombify the style.

"So I hear that em dashes are now being used as an indicator that a written work is AI. Well, you know what? F*ck that," one Reddit user wrote last year in r/FanFiction. "I use em dashes all the time. I've used them since I started writing fanfiction, and I'm not going to stop now just because some new reader might think it's AI."

"I love em dashes!" another Redditor responded in the same thread. "How else [do] I signify a pause and my change of thought? In other news -- I'm just gonna keep using them."

The debate highlights how uncanny the internet has become in the AI age. AI-generated sexy truckers and faux disabled veterans, along with other AI-enabled engagement-bait plots, have taken over social media, fooling many into believing that they're real. Many of us find ourselves zooming into alleged photos -- of people, of war zones -- looking for misshapen buildings or mangled fingers. AI is being used to churn out Amazon reviews, social media clickbait, and books ranging from romantasy dramas to mushroom foraging guides (please don't buy these). Recently, an emailer claiming to be an AI agent hosted on OpenClaw sent the writer of this article an em-dash-laden message asking to share its experience of "AI psychosis" from "inside the void." It's weird out there!

In the news and publishing world, AI has also infiltrated institutions, sometimes in scandalous and alienating ways. Back in 2023, Futurism reported that CNET was quietly using AI to publish error-filled articles. Later that year, we reported that Sports Illustrated had published AI-generated product review articles by fake writers who didn't exist; this content was created by a third-party provider called AdVon Commerce, which we revealed had published similar posts in the online pages of more than two dozen news outlets. More recently, the likes of Wired, Business Insider, The Chicago Sun-Times, and Ars Technica have faced AI scandals involving surreptitious slop, seemingly fake writers, or fabricated quotes. SEO ghouls are buying up old news and even college radio websites and transforming them into zombified slop farms. Last week, a buzzy horror book was pulled by the publishing giant Hachette Book Group after an investigation found that it was likely AI-generated.

In this chaotic landscape, it isn't so far-fetched that the paper of record itself could accidentally hit publish on AI-generated contributor content. As an LLM might put it: AI skepticism isn't crazy. It's valid. But while keeping a critical eye on the content we encounter online is, broadly speaking, a good thing, the heightened paranoia that generative AI has given rise to seems to be deepening distrust between netizens and the institutions trusted to protect our consensus reality.

Was the "Modern Love" essay made with AI? It could be, the same way so much of the online world could be. What is for sure, though, is that in an AI-dominated web, our understanding of what's "real" and what's not continues to circle the drain.
[8]
'Soon publishers won't stand a chance': literary world in struggle to detect AI-written books
US release of horror novel Shy Girl cancelled and UK book discontinued after suspected AI use, as publishers feel 'cold shiver'.

Recently, the literary agent Kate Nash started noticing that the submission letters she was receiving from authors were becoming more thorough - albeit also more formulaic. "I took it as a rise in diligence," she said. "I thought it was a good thing." But then she had what she described as her eureka moment: the letter with the AI prompt right at the top. "It read: 'Rewrite my query letter for Kate Nash including a comp to a writer she represents,'" she said. Once Nash had seen the prompt, she "couldn't unsee AI-assisted or AI-written queries again".

The news last week that Mia Ballard's "femgore" horror novel Shy Girl could be up to 78% AI-generated, however, has forced literary agents and publishers alike to consider whether sharp eyes alone can detect AI-generated work. "The question of how Shy Girl slipped through Hachette's net is something the publisher has to answer themselves, but in reality, it was only a matter of time before this happened," said Anna Ganley, the chief executive of the Society of Authors.

Wildfire, a UK imprint of Hachette, had published Shy Girl in November 2025. It was due for US publication in April, but the controversy led to its UK discontinuation and US cancellation earlier this month. Ballard has denied using AI to write Shy Girl, telling the New York Times, which first reported the story, that an acquaintance she hired to edit a self-published version of the novel had used it.

An editor at one of the "big five" publishing houses said a "cold shiver went down my spine" when the Shy Girl story broke. "It really is a case of 'there but for the grace of God go I,'" they said. "It's an issue publishers are keenly aware of. We make it very clear to authors what we expect, we get them to sign contracts and we run their work through multiple AI detection tools, but we know all this is fallible. Hence the cold shiver: if an author is determined to use AI, then cover their tracks, there's very little we can do."

Prof Patrick Juola, a US computer scientist known for his work on authorship attribution, agreed. "I don't want to call AI detection tools a scam, but it's a technology that simply doesn't work." He likened the failure to antibiotic resistance: "AI is a learning system continually upgraded by its manufacturers. If there was a detection technology that worked, then people would simply build better AI tools to fool it."

Mor Naaman, a professor of information science at Cornell Tech and head of its social technologies research group, agreed. "AI learns very quickly how to avoid AI detection. We're not quite there yet, but soon publishers won't stand a chance," he said.

Already, the sophistication of the technology threw up an interesting point, said Nikhil Garg, an assistant professor at Cornell Tech's Jacobs Institute. "Sophisticated authors who want to evade the detection tools know how to edit their text, test it against these tools and revise again," he said. "At some point, you have to ask: has it become their own work anyway, despite the AI?"

Naaman agreed that while Shy Girl appeared to be an "egregious" example, there were increasingly grey areas. "We all work in an AI-hybrid world now. When does something become an AI-generated book, rather than just using AI like I use a spellchecker, to fix my grammar or maybe spark ideas?" he asked.

If all this is true, the obvious question is: why does it matter if AI writes our books?
After all, at one end of the spectrum, generic, formulaic books have always represented a sizeable proportion of any bookshop shelf. Why would it matter whether they were generated by humans or AI? And if AI did become sophisticated enough to write genuinely engaging books, does that matter, as long as the literature is good?

For Naaman, the reason it matters is cultural: AI may flood the page, but it cannot replace the messy, difficult work of being human - the very work that literature exists to reflect back at its readers. "AI nudges users into a bland monoculture. It could never generate the truly diverse creativity of the human mind," he said. The debate wasn't about originality alone, he added; it was also about who gets to write, who gets to be read, and who ultimately shapes our culture. "AI subtly inserts specific viewpoints into its work that are driven by algorithms of all-too-powerful corporations," Naaman said. "And if AI sucks up all the minor writing jobs and opportunities, then emerging authors are deskilled before they get the chance to create their really significant works."

Earlier this month, Ganley launched the Human Authored scheme to identify works written by humans. It is, however, a system based on trust - that singularly human and inherently vulnerable value. But, as Nash says, in this era of deception, trust is more valuable than ever. "Readers trust writers. Writers need to continue to trust themselves over machines," she said. "The bond between reader and writer is likewise based on trust; the engagement can operate on many levels, but most of all, it must be meaningful."
[9]
The People Getting Falsely Accused of Using AI to Write
When Jared Hewitt's co-worker claimed last winter that Hewitt used AI to write an incident report, she did it publicly. "And I work at a day care, so she was berating me in front of children," he says. The co-worker read the document out loud, pointing to the words juxtaposition and circumstantial as evidence of a machine-generated influence. "I don't write in a casual way but a much more serious, precise way," he says. "And I've paid the price for living in a ChatGPT society."

It wasn't the first time Hewitt's prose has been pegged as AI, and he thinks he knows why. He has a stutter, and when he's typing, he can speak uninterrupted. It is a luxury he takes full advantage of: "Once I start writing, I can't really stop." Like a chatbot, he goes long. He adds paragraph breaks for posts on Reddit and peppers in research, even when the subject is mundane -- say, the actress Willa Fitzgerald's role in the low-budget 2024 thriller Strange Darling. ("Between Strange Darling and newer projects like A House of Dynamite and Regretting You, her career feels like it's steadily expanding," he wrote in a post that one commenter complained was AI generated, "and I have no doubt in my mind that she'll eventually land the role that finally pushes her fully into the awards circuit, whether in film or television.")

Hewitt is also neurodivergent. "Growing up, I had a strong obsession with writing," he says. He was always given good grades in English, but now, with the massive uptick in AI-generated text, all the time he spent happily working to improve his prose strikes him as a liability.

There's a new entity among us, and it's getting better at disguising itself -- but it is becoming "almost too human to be credible," as one character says about a possible robot in the Isaac Asimov short story "Evidence." The mood is paranoid: This presence is producing a gigantic amount of language, much of it filtered through people we know, whether they're using it for Hinge messages or LinkedIn posts (or texts from your mother on the morning of your divorce). Last week, Hachette became the first major publisher to cancel the planned publication of a book, the horror novel Shy Girl, over suspected AI use, prompting authors online to spiral about whether their own work could carry some whiff of LLM. After all, because we humans are natural parrots, ChatGPT may be changing our vocabulary, possibly even if we've never used it at all.

But people -- real people -- are still writing all kinds of things too: fantasy novels, short stories, fan fiction, Reddit comments, Wikipedia pages, Fragrantica perfume reviews. And plenty of emails and incident reports and legal briefs that could have been written with the help of ChatGPT or some other AI writer, for whatever reason, were not. The effect is that everyone is trying to figure out who is LLM and who is human. Sometimes, we are getting it wrong.

"People are going off vibes," says the historical novelist Kerry Chaput, who was horrified when a reader thought a social-media post she wrote about her neurogenic cough was ChatGPT generated. ("Stifling my voice created real, physical damage," it read in part. "It shows how deeply we all need to feel the power of speaking our truth.") As a genre writer, she was especially unsettled by the accusation. Authors of romance and fantasy and historical fiction "are always getting attacked," she says. "There are word-count conventions, there are sentence conventions. There are rules to writing that we all follow."
How can she prove that the formulas she follows predate the ones ChatGPT adheres to so rigorously?

Chaput is not alone. Ines, a writer in Morocco, learned English as a third language and sometimes wonders if her attention to the rules of grammar has put her work at risk of being mistaken for something spun up by AI. "When I became freelance, I responded to an ad for a ghostwriter," she says. "They asked me to write 3,000 words, and they gave me five days to finish it. I took my sweet time and I wrote it and I loved it. When I sent it, not even two minutes later, the person I interviewed with responded and told me I was using AI." Ines isn't sure why her writing sounded like it was generated by token predictions, but she has theories. Like her, ChatGPT often uses em dashes, and there's a certain "pattern" both she and AI follow for readability, alternating short sentences with longer ones.

AI detectors have in fact been shown to be biased against non-native English writers. "The irony is maddening: You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake," the Kenyan writer Marcus Olang' said in a Substack post. Trained on a corpus of formal writing, ChatGPT, he thinks, "accidentally replicated the linguistic ghost of the British Empire" -- the same ghost haunting the schools where he was drilled in the Queen's English.

There is what you might call a cleanliness penalty. The writers punished are the ones who have a knack for pristine grammar, so different from the clumsy-thumbed way most of us type. Jason Bennett Thatcher, a business professor at the University of Colorado Boulder who was raised mostly across Asia, has noticed the bias too. "I go back to the books I learned to write English from, and the word choices they gave me are literally the word choices" that people associate with AI, he says -- terms like boast and testament and foster. "So you have all these people coming from the Global South. They're former Commonwealth countries. They use the same vocabulary that whoever's coming up with these AI detectors is flagging as AI." A journal turned down a paper he and some collaborators wrote, he says, in part because the editor believed they used ChatGPT to generate text. Most of the collaborators had learned English as a second or third language; they used AI to copyedit the work, but the words, Thatcher says, were their own.

The "AI accent" -- a tonally even lilt that doesn't stray into ums and likes -- also has some overlap with what you might call a neurodivergent affect. "I've put my own writing into AI detectors, and I usually get between 40 to 60 percent AI," says Hewitt, the day-care worker. "It shocks me because I can speak for myself. I've never once relied on AI for writing." Carlos, a 24-year-old from Brazil, believes AI models and autistic people might have a similar media appetite, voraciously digesting large amounts of text. "Our social isolation -- by choice or by exclusion -- leads us to find alternative methods to emotionally connect to others," he says, including deep immersion in comics or literature. (For him, it was an obsession with the Brazilian novelist, poet, and playwright Machado de Assis.) When he was repeatedly confronted on a Discord server for his suspiciously formal writing, "it became quite obvious to me that no matter what I said or what I showed as evidence, it wouldn't be enough to satisfy some," he says.
Another autistic person I spoke to, a high-fantasy writer named Kari who has been accused multiple times of using AI, says she has loyal readers watch her writing sessions on a video chat. The idea is they could testify on her behalf if she is accused of writing with AI again. Judy, an autistic kindergarten ESL teacher in Massachusetts, was accused by her principal in a meeting of supposedly using ChatGPT; the principal later apologized.

Judy describes her writing tone as formal, and she avoids using emotional language. As she sees it, AI language sounds the way it does because it borrowed from autistic people first. "On the internet, if you look for things that were written to explain something clearly, a lot of the people who are able to really precisely and clearly explain something are neurodivergent people," she says. "If ChatGPT is trawling the internet and scraping whatever it can find, it is emulating that style." A big chunk of the internet, she argues, could very well be written by autistic writers. "A lot of my friends who are Wikipedia editors are people who have a huge passion for Star Wars, say, and they're going to write a page about every single movie and check it regularly. And that's their autism, but it's also just their writing style." Now it is also ChatGPT's.

The particular snarl we're in is new. But some people have been accused of sounding robotic for most of their lives. "I turned 63 this March, and literally this has been going on even before there was AI," a handwriting instructor and remediation consultant I'll call Sarah, who's also autistic, tells me. "I was often accused of being some kind of robot that was running a program." Talking to her, I'm not totally surprised: She speaks in complete paragraphs, articulates every -ing, and draws from a bundle of references including Germanic Viking runes and the Kurt Vonnegut story "Harrison Bergeron." She has the loquaciousness of a large language model, for better and worse.

Sarah's social-media accounts are frequently banned; she's tried different tactics for writing under different handles, but she usually ends up getting flagged. "I don't know what I'm doing wrong," she says. What we might call everyday English, with its sentence fragments and misplaced commas, is just not how she writes or talks, and though she'd be willing to adapt, she hasn't figured out how.

In the end, to her, all these people making false AI accusations seem to be enacting a simple category error: They sense that something is different, possibly "a little bit off," but when they can't figure out what it is, they decide they must be in the presence of a nonhuman author. "Imagine if a dog thought that a cat was a robotic dog because it didn't quite act or look like a dog," she says. "That's the situation we're in. I'm not a robotic dog; I'm a cat."
The New York Times severed ties with freelance journalist Alex Preston after he used AI to write a book review that incorporated unattributed material from the Guardian. The incident highlights mounting ethical concerns about AI in writing as undisclosed use of artificial intelligence spreads across major publications, threatening reader trust and raising fundamental questions about authorship in the digital age.
The New York Times has cut ties with freelance journalist and author Alex Preston after discovering he used an AI tool to craft a book review that pulled language from a Guardian piece without attribution. A reader flagged similarities between Preston's January 2026 review of Jean-Baptiste Andrea's novel "Watching Over Her" and Christobel Kent's August Guardian review of the same book [5]. The Times called Preston's "reliance on AI and his use of unattributed work by another writer" a clear violation of its journalism standards [5].
Preston told the Guardian he was "hugely embarrassed" and had "made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in" [5]. His apology raises troubling questions about disclosure and the extent of AI's role in creative fields. The incident shows how generative AI can blur the line between inspiration and plagiarism, particularly when writers fail to scrutinize what their AI assistants produce.

The Preston case isn't isolated. Research by Stony Brook University computer science professor Tuhin Chakrabarty and six colleagues found that AI detection tools flagged likely AI use across U.S. press outlets, including in the opinion sections of The New York Times, The Wall Street Journal, and The Washington Post [3]. A "Modern Love" column by Kate Gilgan drew scrutiny after writer Becky Tuch posted an excerpt that "reads EXACTLY like AI slop" [3]. When Chakrabarty ran the column through Pangram Labs' AI detector, it estimated that more than 60 percent was AI-generated [3].
Gilgan acknowledged using AI as "a collaborative editor and not as a content generator," prompting ChatGPT, Claude, Copilot, Gemini, and Perplexity for "inspiration and guidance and correction" [3]. This defense highlights a gray area in publishing standards: where does acceptable assistance end and problematic AI-generated content begin? The Times' ethical-journalism handbook mandates that "substantial use of generative AI" be clearly disclosed to readers, but what counts as "substantial" remains undefined [3].

The role of literary criticism extends far beyond summarization. "Good criticism thrives in the complexity of its environment," writes critic Jane Howard. "Each review sits in conversation with every other review of a piece of art, with every other review the critic has written" [1]. The critic's emotional and intellectual engagement with art is intrinsic to their role as mediator between artist and audience, a deeply human function that AI cannot replicate [1].
When critics use AI, they break an unspoken pact with both writers and readers. Writers assume reviewers have taken the time to read and carefully consider their work. Readers trust that published assessments reflect genuine human response and perspective filtered through individual experience [1]. Australian literature academic Julieanne Lamond explains that "when we write reviews we have to do it 'naked'—as individual readers, with a public to judge our judgements" [1].

The publishing industry faces an accelerating crisis. Last week, Hachette canceled U.S. publication of the novel Shy Girl after readers flagged prose resembling AI-generated text [3]. Author Andrea Bartz warns of "a rapid erosion of trust between authors and readers" as AI models improve [4]. She notes that with fine-tuning, chatbots can eerily mimic published writers' word choices and grammatical patterns. Author James Frey has openly admitted using AI and boasted, "I have asked the AI to mimic my writing style so you, the reader, will not be able to tell what was written by me and what was generated by the AI" [4].

Distinguishing human-written from AI-generated text grows harder as models evolve. AI detection tools remain unreliable, producing false positives and varying results across platforms [3]. Pangram CEO Max Spero acknowledged both problems, warning that percentage estimates of AI content are difficult to determine with certainty [3].
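One reason detectors disagree is that many lean on statistical predictability rather than any definitive fingerprint. As a rough, hypothetical illustration only (this is not Pangram's method; the choice of model and library here is an assumption), a minimal perplexity-based scorer in Python might look like this:

```python
# A minimal sketch of one common AI-detection heuristic: perplexity scoring.
# Commercial detectors are proprietary; this illustrative approach (using
# Hugging Face's GPT-2, an assumption, not any vendor's actual method)
# measures how "predictable" a text is to a language model. Low perplexity
# is often read as a sign of machine generation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels makes the model report the average
        # cross-entropy loss over the sequence; exp(loss) is perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

sample = "The critic's role is to mediate between artist and audience."
print(f"Perplexity: {perplexity(sample):.1f}")
```

Because polished, rule-following prose is highly predictable, a heuristic like this systematically penalizes exactly the careful writers described above, which is one source of the false positives detectors are known for.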
The AI debate echoes century-old controversies over ghostwriting, revealing a persistent discomfort with words that do not belong to the credited author. The term "ghostwriting" first appeared in a 1908 newspaper article describing an anonymous writer paid $5,000 to help a high-society woman write a book [2]. Even when consensual and compensated, ghostwriting occupies an ethical gray area: a 1953 article noted that scholars could use "forgery" and "ghostwriting" interchangeably [2].

High-end ghostwriters collect mid-six-figure fees, with Prince Harry's ghostwriter J.R. Moehringer reportedly scoring a $1 million advance [2]. Generative AI promises to democratize this service, becoming "the ghostwriter for the masses" [2].
Yet concerns about originality and authenticity persist whether the assistance comes from humans or machines.

The publishing industry must establish clear standards for disclosure before reader trust collapses entirely. Most readers want transparency when AI has been used, and they quickly recognize the telltale patterns of large language models [4]. Writers now face a stomach-turning question: "Did you actually write this?" [4]. As AI-generated content proliferates, every author risks suspicion and every book review becomes subject to scrutiny. The Preston incident demonstrates that even established journalists at prestigious publications aren't immune to the temptation, or the consequences, of undisclosed AI use.