3 Sources
[1]
Opinion | What the 'Shy Girl' Mess Says About the Future of Fiction
When readers ask questions about my thriller novels, I love to discuss the themes and characters in them and the inspiration for my writing. But as generative artificial intelligence worms its way through the publishing industry, I'm bracing for a stomach-turning query: Did you actually write this?

The worry has been at the front of my mind since last week, when Hachette canceled the forthcoming U.S. publication of the horror novel "Shy Girl" after readers and journalists flagged prose that sounded like A.I. slop. (The author maintains that a freelance editor is to blame for any prose penned by a large language model.)

Though I'm against the use of generative A.I in creative writing, not everyone feels the same way. What does seem clear, however, is that most readers want disclosure when A.I. has been used, and they are quick to note the telltale rhythms and patterns of popular large language models.

But as A.I. models continue to improve, I'm concerned that it will become difficult to distinguish something written by a human from something written by a bot. As more A.I.-generated writing is put out in the world, more readers will question whether the text they are poring over was penned by a human. We're barreling toward a rapid erosion of trust between authors and readers, and the publishing industry is unprepared to deal with the consequences.

Already, with a little fine-tuning, chatbots can be eerily good mimics of published writers, nailing their word choices and go-to grammatical patterns. James Frey, an author who's no stranger to controversy and who has proudly admitted to using A.I. to write, has noted, "I have asked the A.I. to mimic my writing style so you, the reader, will not be able to tell what was written by me and what was generated by the A.I."

Shortly after ChatGPT was publicly released, I entered the prompt "write a short story in the style of author Andrea Bartz."
The output was an uncanny facsimile of my prose -- the actual scenes it generated made little sense, but the rhythm and sentences themselves mimicked some of the deliberate stylistic choices I make in my books.
[2]
New York Times Accused of Running AI-Generated Article
The New York Times faced scrutiny online this week after netizens speculated that a personal essay featured in its storied "Modern Love" column was generated using AI and published without disclosure. Nothing is proven; the AI allegations remain exactly that. The AI paranoia among readers, though, is very real.

The controversy kicked off over the weekend, when Becky Tuch of Lit Mag News took to X-formerly-Twitter to raise concerns about a "Modern Love" essay published by the newspaper in November 2025, titled "I was Deemed Unfit to Be a Mother." The essay, written by a Canadian writer named Kate Gilgan, describes the author's experience of losing custody of her son due to her alcoholism.

"I don't want to falsely accuse writers of AI-use. But this reads EXACTLY like AI slop," Tuch wrote in a Sunday post. "And this is the frickin [New York Times] Modern Love column, which is notoriously competitive, super hard to break into. Just sad."

In her post, Tuch shared a screenshot of a section of Gilgan's piece, which read:

"Not hate. Not anger. Just the flat finality of a heart too tired to keep trying. That's when I stopped fighting. I didn't give up. I shifted. I stopped thinking love was something I had to prove with court documents and supervised visits and legal bills. I stopped chasing every possible way to make him see I had changed. I started focusing on actually changing."

It's true that the text includes sentence structures commonly associated with AI-generated text. A guide issued last year by Wikipedia editors, for example, called out how much chatbots seem to love parallelisms -- a technique Gilgan employs in the first few sentences of the excerpt, framing her experience in a "not X, not X, but Y" format. Large language models have also been observed to rely heavily on the "rule of three," a well-known rhetorical tool; Gilgan's essay features plenty of rule-of-three-style text, both in the excerpt flagged by Tuch and throughout the piece.

People quickly piled onto Tuch's post. Some agreed with her, proclaiming that the text appeared to be pure AI slop. Others said that, to them, the piece just read like regular "Modern Love" material.

"There's been one lone guy editing [Modern Love] for about two decades and this is what he sounds like. It's how he edits. I've been edited by him and I recognize the style," commented the writer Ann Bauer. "This def could be AI! Not saying it isn't. But to me, it just sounds like a Modern Love."

Others made a different point entirely: that making allegations about AI use based on writing style alone is a dangerously slippery slope. "I think accusing writers of AI use without evidence is a pretty bad road to go down," responded Public Books editor Dennis Hogan, "all things considered."

We reached out to both Gilgan and the NYT but didn't hear back. "Modern Love" has no standalone AI policy; however, the NYT's broader AI policy promises transparency about the use of the tech. Again, though: all of this is conjecture, based wholly on the writing itself. (Some folks shared AI detection tools flagging the writing as likely AI-generated, but these programs should always be taken with a heavy serving of salt.)

It's worth noting that the large language models (LLMs) powering chatbots didn't actually invent "not X, but Y" parallelisms, nor the rule of three. They also didn't invent em-dashes, which many netizens have come to take as another telltale sign of AI writing -- a phenomenon that's frustrated many writers who don't want to give their beloved em-dashes up, even as AI-generated marketing copy and self-published books guzzle up and zombify the style.

"So I hear that em dashes are now being used as an indicator that a written work is AI. Well, you know what? F*ck that," one Reddit user wrote last year in r/FanFiction. "I use em dashes all the time. I've used them since I started writing fanfiction, and I'm not going to stop now just because some new reader might think it's AI."

"I love em dashes!" another Redditor responded in the same thread. "How else [do] I signify a pause and my change of thought? In other news -- I'm just gonna keep using them."

The debate highlights how uncanny the internet has become in the AI age. AI-generated sexy truckers and faux disabled veterans, along with other AI-enabled engagement-bait plots, have taken over social media, fooling many into believing that they're real. Many of us find ourselves zooming into alleged photos -- of people, of war zones -- looking for misshapen buildings or mangled fingers. AI is being used to churn out Amazon reviews, social media clickbait, and books ranging from romantasy dramas to mushroom foraging guides (please don't buy these). Recently, an emailer claiming to be an AI agent hosted on OpenClaw sent the writer of this article an em-dash-laden message asking to share its experience of "AI psychosis" from "inside the void." It's weird out there!

In the news and publishing world, AI has also infiltrated institutions, sometimes in scandalous and alienating ways. Back in 2023, Futurism reported that CNET was quietly using AI to publish error-filled articles. Later that year, we reported that Sports Illustrated had published AI-generated product review articles by fake writers who didn't exist; this content was created by a third-party provider called AdVon Commerce, which we revealed had published similar posts in the online pages of more than two dozen news outlets. More recently, the likes of Wired, Business Insider, The Chicago Sun-Times, and Ars Technica have faced AI scandals involving surreptitious slop, seemingly fake writers, or fabricated quotes. SEO ghouls are buying up old news and even college radio websites and transforming them into zombified slop farms. Last week, a buzzy horror book was pulled by the publishing giant Hachette Book Group after an investigation found that it was likely AI-generated.

In this chaotic landscape, that the paper of record itself could accidentally hit publish on AI-generated contributor content isn't so far-fetched. As an LLM might put it: AI skepticism isn't crazy. It's valid. But while keeping a critical eye on the content we encounter online is, broadly speaking, a good thing, the heightened paranoia that generative AI has given rise to seems to be deepening distrust between netizens and the institutions trusted to protect our consensus reality.

Was the "Modern Love" essay made with AI? It could be, the same way so much of the online world could be. What is for sure, though, is that in an AI-dominated web, our understanding of what's "real" and what's not continues to circle the drain.
[3]
The People Getting Falsely Accused of Using AI to Write
When Jared Hewitt's co-worker claimed last winter that Hewitt used AI to write an incident report, she did it publicly. "And I work at a day care, so she was berating me in front of children," he says. The co-worker read the document out loud, pointing to the words "juxtaposition" and "circumstantial" as evidence of a machine-generated influence. "I don't write in a casual way but a much more serious, precise way," he says. "And I've paid the price for living in a ChatGPT society."

It wasn't the first time Hewitt's prose has been pegged as AI, and he thinks he knows why. He has a stutter, and when he's typing, he can speak uninterrupted. It is a luxury he takes full advantage of: "Once I start writing, I can't really stop." Like a chatbot, he goes long. He adds paragraph breaks for posts on Reddit and peppers in research, even when the subject is mundane -- say, the actress Willa Fitzgerald's role in the low-budget 2024 thriller Strange Darling. ("Between Strange Darling and newer projects like A House of Dynamite and Regretting You, her career feels like it's steadily expanding," he wrote in a post that one commenter complained was AI generated, "and I have no doubt in my mind that she'll eventually land the role that finally pushes her fully into the awards circuit, whether in film or television.")

Hewitt is also neurodivergent. "Growing up, I had a strong obsession with writing," he says. He was always given good grades in English, but now, with the massive uptick in AI-generated text, all the time he spent happily working to improve his prose strikes him as a liability.

There's a new entity among us, and it's getting better at disguising itself -- but it is becoming "almost too human to be credible," as one character says about a possible robot in the Isaac Asimov short story "Evidence." The mood is paranoid: This presence is producing a gigantic amount of language, much of it filtered through people we know, whether they're using it for Hinge messages or LinkedIn posts (or texts from your mother on the morning of your divorce). Last week, Hachette became the first major publisher to cancel the planned publication of a book, the horror novel Shy Girl, over suspected AI use, prompting authors online to spiral about whether their own work could carry some whiff of LLM. After all, because we humans are natural parrots, ChatGPT may be changing our vocabulary, possibly even if we've never used it at all.

But people -- real people -- are still writing all kinds of things too: fantasy novels, short stories, fan fiction, Reddit comments, Wikipedia pages, Fragrantica perfume reviews. Yet emails and incident reports and legal briefs, many of which could have been written with the help of ChatGPT or some other AI writer, for whatever reason, were not. The effect is that everyone is trying to figure out who is LLM and who is human. Sometimes, we are getting it wrong.

"People are going off vibes," says the historical novelist Kerry Chaput, who was horrified when a reader thought a social-media post she wrote about her neurogenic cough was ChatGPT generated. ("Stifling my voice created real, physical damage," it read in part. "It shows how deeply we all need to feel the power of speaking our truth.") As a genre writer, she was especially unsettled by the accusation. Authors of romance and fantasy and historical fiction "are always getting attacked," she says. "There are word-count conventions, there are sentence conventions. There are rules to writing that we all follow." How can she prove that the formulas she follows predate the ones ChatGPT adheres to so rigorously?

Chaput is not alone. Ines, a writer in Morocco, learned English as a third language and sometimes wonders if her attention to the rules of grammar has put her work at risk of being mistaken for something spun up by AI. "When I became freelance, I responded to an ad for a ghostwriter," she says. "They asked me to write 3,000 words, and they gave me five days to finish it. I took my sweet time and I wrote it and I loved it. When I sent it, not even two minutes later, the person I interviewed with responded and told me I was using AI."

Ines isn't sure why her writing sounded like it was generated by token predictions, but she has theories. Like her, ChatGPT often uses em dashes, and there's a certain "pattern" both she and AI follow for readability, alternating short sentences with longer ones. AI detectors have in fact been shown to be biased against non-native English writers. "The irony is maddening: You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake," the Kenyan writer Marcus Olang' said in a Substack post. Trained on a corpus of formal writing, ChatGPT, he thinks, "accidentally replicated the linguistic ghost of the British Empire" -- the same ghost haunting the schools where he was drilled in the Queen's English.

There is what you might call a cleanliness penalty. The writers punished are the ones who have a knack for pristine grammar, so different from the clumsy-thumbed way most of us type. Jason Bennett Thatcher, a business professor at the University of Colorado Boulder who was raised mostly across Asia, has noticed the bias too. "I go back to the books I learned to write English from, and the word choices they gave me are literally the word choices" that people associate with AI, he says -- terms like boast and testament and foster. "So you have all these people coming from the Global South. They're former Commonwealth countries. They use the same vocabulary that whoever's coming up with these AI detectors is flagging as AI."

A journal turned down a paper he and some collaborators wrote, he says, in part because the editor believed they used ChatGPT to generate text. Most of the collaborators had learned English as a second or third language; they used AI to copyedit the work, but the words, Thatcher says, were their own.

The "AI accent" -- a tonally even lilt that doesn't stray into ums and likes -- also has some overlap with what you might call a neurodivergent affect. "I've put my own writing into AI detectors, and I usually get between 40 and 60 percent AI," says Hewitt, the day-care worker. "It shocks me because I can speak for myself. I've never once relied on AI for writing."

Carlos, a 24-year-old from Brazil, believes AI models and autistic people might have a similar media appetite, voraciously digesting large amounts of text. "Our social isolation -- by choice or by exclusion -- leads us to find alternative methods to emotionally connect to others," he says, including deep immersion in comics or literature. (For him, it was an obsession with the Brazilian novelist, poet, and playwright Machado de Assis.) When he was repeatedly confronted on a Discord server for his suspiciously formal writing, "it became quite obvious to me that no matter what I said or what I showed as evidence, it wouldn't be enough to satisfy some," he says.

Another autistic person I spoke to, a high-fantasy writer named Kari who has been accused multiple times of using AI, says she has loyal readers watch her writing sessions on a video chat. The idea is that they could testify on her behalf if she is accused of writing with AI again. Judy, an autistic kindergarten ESL teacher in Massachusetts, was accused in a meeting by her principal of supposedly using ChatGPT; the principal later apologized. Judy describes her writing tone as formal, and she avoids using emotional language.

As she sees it, AI language sounds the way it does because it borrowed from autistic people first. "On the internet, if you look for things that were written to explain something clearly, a lot of the people who are able to really precisely and clearly explain something are neurodivergent people," she says. "If ChatGPT is trawling the internet and scraping whatever it can find, it is emulating that style." A big chunk of the internet, she argues, could very well be written by autistic writers. "A lot of my friends who are Wikipedia editors are people who have a huge passion for Star Wars, say, and they're going to write a page about every single movie and check it regularly. And that's their autism, but it's also just their writing style." Now it is also ChatGPT's.

The particular snarl we're in is new. But some people have been accused of sounding robotic for most of their lives. "I turned 63 this March, and literally this has been going on even before there was AI," a handwriting instructor and remediation consultant I'll call Sarah, who's also autistic, tells me. "I was often accused of being some kind of robot that was running a program." Talking to her, I'm not totally surprised: She speaks in complete paragraphs, articulates every -ing, and draws from a bundle of references including Germanic Viking runes and the Kurt Vonnegut story "Harrison Bergeron." She has the loquaciousness of a large language model, for better and worse.

Sarah's social-media accounts are frequently banned; she's tried different tactics for writing under different handles, but she usually ends up getting flagged. "I don't know what I'm doing wrong," she says. What we might call everyday English, with its sentence fragments and misplaced commas, is just not how she writes or talks, and though she'd be willing to adapt, she hasn't figured out how.
In the end, to her, all these people making false AI accusations seem to be enacting a simple category error: They sense that something is different, possibly "a little bit off," but when they can't figure out what it is, they decide they must be in the presence of a nonhuman author. "Imagine if a dog thought that a cat was a robotic dog because it didn't quite act or look like a dog," she says. "That's the situation we're in. I'm not a robotic dog; I'm a cat."
The publishing industry confronts a trust crisis as AI writing becomes harder to detect. After Hachette canceled 'Shy Girl' over suspected AI use, authors face accusations based on writing style alone. Meanwhile, AI detection tools show bias against non-native English writers, and readers struggle to distinguish human creativity from machine-generated text.

The publishing world faces an unprecedented trust crisis as generative AI infiltrates creative writing, leaving authors vulnerable to suspicion and readers questioning the authenticity of every sentence they encounter. The controversy reached a breaking point when Hachette canceled the U.S. publication of the horror novel "Shy Girl" after readers and journalists flagged prose that resembled AI-generated content [1]. The author claimed a freelance editor was responsible for any language-model-generated text, but the damage was done. The "Shy Girl" controversy marks the first time a major publisher has pulled a book over suspected AI use, signaling how seriously the industry takes the threat to creative integrity.

The incident has thrust a troubling question into the spotlight: how can readers trust that what they're reading was written by a human? As thriller novelist Andrea Bartz notes, she now braces for the "stomach-turning query" from readers asking whether she actually wrote her own books [1]. This erosion of trust between authors and their audiences threatens the fundamental relationship that has sustained literature for centuries.

Language models have become eerily proficient at mimicking human writing style, making it increasingly difficult to distinguish human from AI text. Author James Frey has openly admitted to using AI and boasted that he has "asked the A.I. to mimic my writing style so you, the reader, will not be able to tell what was written by me and what was generated by the A.I." [1]. When Bartz tested ChatGPT by prompting it to write in her style, the output was "an uncanny facsimile" of her prose, capturing her deliberate stylistic choices even if the actual scenes made little sense [1].

The public paranoia about AI reached fever pitch when The New York Times faced scrutiny over a "Modern Love" essay titled "I was Deemed Unfit to Be a Mother" by Kate Gilgan [2]. Readers flagged the piece for using parallelisms like "not X, not X, but Y" structures and the "rule of three" rhetorical device, patterns commonly associated with AI-generated content. Yet as writer Ann Bauer pointed out, the style might simply reflect the column's longtime editor's preferences rather than machine generation [2]. Nothing was proven, but the allegations themselves reveal how deeply suspicion has taken root.

The rush to identify AI-generated content has created collateral damage, with real writers falsely accused of using AI based on their writing style alone. Jared Hewitt, who works at a day care, was publicly berated by a co-worker who claimed his incident report was AI-generated, citing words like "juxtaposition" and "circumstantial" as evidence [3]. Hewitt, who has a stutter and finds freedom in writing, where he can express himself without interruption, says he's "paid the price for living in a ChatGPT society" [3].

Historical novelist Kerry Chaput was horrified when a reader accused her social-media post about her neurogenic cough of being ChatGPT-generated [3]. As a genre writer, she worries that the conventions of romance, fantasy, and historical fiction, established formulas that predate large language models, now make her work vulnerable to suspicion. "People are going off vibes," Chaput says, highlighting how subjective and unreliable these accusations have become [3].
The situation worsens when writers turn to technology for vindication, only to discover that unreliable AI detection tools compound the problem rather than solve it. These programs have demonstrated significant bias against non-native English speakers, who often adhere more strictly to formal grammar rules. Ines, a Moroccan writer who learned English as her third language, was told her 3,000-word ghostwriting sample was AI-generated within two minutes of submission [3]. She suspects her use of em dashes and alternating sentence patterns, techniques she employs for readability, triggered the false positive.

Kenyan writer Marcus Olang' articulated the bitter irony: "You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake" [3]. He argues that ChatGPT, trained on formal writing, "accidentally replicated the linguistic ghost of the British Empire," the same formal English taught in schools across former colonies [3].

While opinions on generative AI in creative writing remain divided, most readers demand disclosure when AI has been used in the writing process [1]. The New York Times maintains an AI policy promising transparency about technology use, though "Modern Love" has no standalone AI policy [2]. As AI models continue improving, the concern isn't just about current detection capabilities; it's about a future where authenticity becomes impossible to verify.

The publishing industry remains unprepared to handle the consequences of this rapid erosion of trust [1]. Authors who've spent years developing their craft now find their stylistic choices, from em dashes to parallelisms to careful grammar, transformed into evidence against them. The mood has turned paranoid as readers zoom into text looking for telltale patterns, while writers worry that the very techniques that once earned them praise now mark them as suspicious. What's clear is that the relationship between authors and readers has fundamentally shifted, and the path forward requires more than just better detection tools; it demands new frameworks for establishing trust in an age when machines can convincingly mimic human creativity.

Summarized by Navi