2 Sources
[1]
How Authors and Readers Feel About the 'Shy Girl' Cancellation
Major publishing houses risk unwittingly putting out books generated with A.I. tools. Authors and readers are frustrated, nervous and grasping for solutions.

Last fall, Antonio Bricio, an engineering consultant who lives in Guadalajara, Mexico, finished a draft of his first novel, a science fiction thriller about a government conspiracy to bury the history of humanity's first contact with alien refugees. After querying 20 literary agents and getting a string of rejections, he spent several months furiously revising it in hopes of one day landing a publisher.

Now, Bricio worries that the already taxing process of getting a publishing deal as a debut author has become even more fraught. He fears that agents and publishers will avoid taking risks on unknown authors over concerns that they might have written the book using artificial intelligence.

The panic and paranoia over A.I.-generated books exploded last month, when a major publisher, Hachette, decided to cancel the release of a horror novel, "Shy Girl," by Mia Ballard, in the United States over evidence suggesting that it had been partly produced by A.I. Hachette also pulled the book in the United Kingdom, where it released "Shy Girl" last year after Ballard initially self-published it.

When Bricio learned about the novel's cancellation on social media, his stomach dropped. He said he does not use A.I. to write, except to occasionally translate a stray word or phrase from his native Spanish into English, in which he is also fluent, using the A.I. translation program DeepL. But he wondered what an A.I. detector would say about his work. So he paid for a subscription to Originality.ai and uploaded a chapter of his novel. The detector was 100 percent confident that he had used A.I. in some way.

Bricio searched for the phrases that had tripped up the detector, deleted some sentences and reran it. This time, the program said it was 100 percent certain that a human had written it.
Eventually, Bricio had a chat conversation with a customer service representative, who told him that if he received results that incorrectly flagged his work as A.I.-generated, he might need a different model of the program. The back and forth only left Bricio more unsettled. The Originality.ai reports on his draft, which he shared with The Times, showed that adding or deleting even just a few sentences produced wildly different results.

"What if publishers or agents start running these A.I. tools on everybody?" Bricio said. "Everybody is going to walk on eggshells from now on."

As the publishing industry wrestles with the intrusion of A.I. into nearly every aspect of the business, there seems to be little consensus over what publishers can or should do to regulate how writers use the technology. But many agree that the current state of affairs is untenable. A growing number of writers face unfounded suspicions of A.I. use. Others use A.I. without disclosing it. Many readers feel confused and wary, not knowing whether the books they're reading were written by a human or a machine. Quite a few self-published authors have been called out for obvious A.I. use and pilloried by readers and fellow writers as a result.

But the "Shy Girl" controversy could prove to be a turning point for the entire book business. In the wake of the novel's cancellation, many readers and authors questioned how a major publishing company failed to catch signs of A.I. writing. Commenters on Goodreads and Reddit had complained for months about what they called obvious evidence of chatbot language. The scandal has prompted some readers to question how much publishing houses vet the work they acquire.

"We're reaching this era of distrust, with no easy way to prove the veracity of your own writing," said Andrea Bartz, a thriller writer who was a lead plaintiff in the class-action lawsuit brought by authors against Anthropic, which agreed to a $1.5 billion settlement.
Bartz recently put some of her own writing into Ace, an A.I. checker, and was startled when the program labeled her work as 82 percent A.I.-generated. The program then offered her a solution: "Would you like to humanize your text?"

When Bartz wrote about her experience on Substack, dozens of writers chimed in. "I guess that's what happens when your books were stolen to program A.I.," the novelist Rene Denfeld commented, noting that an A.I. detection program had also falsely determined some of her writing to be A.I.-generated.

"It's got to be a wake-up call for the industry," said Jane Friedman, a publishing consultant.

Most major publishing houses don't have clear-cut rules around A.I. use for authors, operating instead on trust and the expectation that writers will be transparent. But with the many ways A.I. is seeping into book creation, from research to editing to composing sentences, there is confusion over which forms of A.I. use cross a line -- and a heightened fear that A.I. writing can, and will, steal past professional editors.

When Rachel Louise Atkin, who reviews books on Goodreads, Instagram and TikTok for thousands of followers, first heard about "Shy Girl" on social media, it sounded like a book she would love -- a gripping and twisted feminist horror story. She devoured the book in a day and recommended it widely. She said she was shocked to learn that it had been pulled over evidence of A.I. use.

"If I knew for definite that something was written with A.I., I would have avoided it," she said. "I think we should be able to make the choice if we want to read something that was written with A.I. or not."

The book influencer Stacy Smith found "Shy Girl" on NetGalley, a site where readers can access books to review ahead of publication, and gave it a five-star review on Goodreads. She, too, was dismayed to learn of the accusations. "I would read books written with A.I., but I would like to know they were written by A.I.," she said.
"It's the dishonesty that hurts."

Authors, meanwhile, often feel threatened from all sides. The ever-increasing number of books published each year, including those written with A.I., makes it more difficult for writers to find an audience in a fractured and oversaturated entertainment marketplace. On top of that, authors who steer clear of A.I. now feel pressure to prove their human bona fides -- with no great options for doing so short of live-streaming as they type.

Some writers are adding a logo to their books and websites that says "human authored." The certification, offered by the Authors Guild, allows authors to attest that they wrote their books without using A.I. to generate or substantially shape prose. While the guild does not independently verify authors' claims, writers may be subject to trademark violation suits if they violate the logo's terms of use. A.M. Dunnewin, a self-published author of horror novels, registered for the certification and put the symbol on her website: "I thought, maybe having that certificate could be a safety net, letting people know that it's my work."

Sarina Bowen, an author who has self-published some books and released others with major publishers, was accused of using A.I. to create the cover for one of her novels. It was a charge she easily refuted; the novel was published years before generative A.I. went mainstream. But now, she worries about the cover art she sources online -- a fairly common practice among self-published authors -- and whether an artist might have used the technology to produce it.

"I don't know where we go from here, but that moment when I was accused of having an A.I. cover was really a downer," she said. "Everyone who publishes books is swimming in this world where we can't be sure of the origin of our content."

Readers who picked up "Shy Girl" were among the first to spot signs of A.I. generation in its pages, and they clearly didn't like it.
Some writers said they found that encouraging. "If they are going to spend money on a book, they want it to come from the author's brain and heart and not a computer that's robbed the writer's brain," said Laura Taylor Namey, who writes young adult fiction. "I applaud that."

But others fear that more A.I.-generated books will slip through the cracks. And as technology improves, the telltale signs of chatbot prose might disappear. "I'm really not looking forward to the day when readers can't tell the difference," Bowen said.
[2]
We Talked to a Writer Accused of Publishing an AI-Generated Essay in The New York Times
Was AI used to produce a personal essay that wound up in the pages of the New York Times? The answer is complicated.

The writer Kate Gilgan found herself at the center of a literary scandal last month when, on social media, another writer accused her of using AI to write an emotional first-person essay about the experience of losing custody of her young son at the height of her alcoholism. The piece had been published in the NYT's famously competitive "Modern Love" column back in October; the accusations were made without any hard evidence, and the writer who accused Gilgan of using AI, The Lit Mag's Becky Tuch, pointed only to the style of Gilgan's article as proof. Others quickly piled on, and soon much of literary social media was swarming with speculation and analyses via AI content detectors (which, we should note, are known to be unreliable).

Gilgan is pretty offline, she told Futurism -- so it wasn't until journalists started asking her about the controversy that she realized there was one at all. "I'm actually not on Twitter or X or whatever that is," said Gilgan, who spoke to us from her home in the Western Canadian province of Saskatchewan. But she "wasn't that worried," she said, "because AI wasn't used to generate that content."

That contention, it turns out, is a bit semantic. As Gilgan conceded to The Atlantic, she did make use of a variety of chatbots -- ChatGPT, Claude, Copilot, and Perplexity -- for conceptualizing and editing the piece, though she denied copying and pasting anything directly from an AI into her essay. The situation, in other words, is messy.
Though the AI accusations against her were unsubstantiated at first -- they were based simply on certain rhetorical devices that chatbot-generated writing is known to favor, and which the public is clearly starting to be on the lookout for -- it turned out that readers were right to be suspicious, since AI did have a prominent hand in the creation of the piece.

The controversy comes at an intensifying moment for the literary world's ongoing struggle with AI. Institutional scandals continue to abound -- within the same two-week span in which the allegations against Gilgan emerged, the publishing giant Hachette pulled a buzzy new horror novel over suspicion of substantial AI use, and the NYT cut all ties with a book critic after it was discovered that his usage of AI had resulted in the newspaper publishing a significantly plagiarized book review -- while some writers and journalists are starting to open up about their sometimes very extensive use of AI.

To unpack it all, I wanted to talk to Gilgan myself -- about how she used AI, what it means when a machine becomes a collaborator in the creative process, and where writers should draw the line. In an interview, Gilgan maintained that the idea that she published AI slop in "Modern Love" is false. But she did use chatbots to help her craft a piece specifically for publication in the column, and there's no question that it ended up with the distinctive argot of AI.

One thing was clear: AI use has turned into one of the most contentious topics in the literary community. "I was going back and reading a lot of my earlier pieces -- I guess, maybe intuitively, I was wondering, 'Oh, my God, has that happened? Has AI changed my voice?'" Gilgan told me. But "I don't think I actually worried about it, because I haven't used it to that extent."
***

Gilgan started taking getting published seriously about ten years ago, she told us, writing about extremely personal topics like an extramarital affair she'd had and her family's experience of being trapped in Bali during the pandemic. And even before that, about 15 years ago, she tried -- and failed -- to write a memoir about the same experience she later explored in her "Modern Love" piece: losing custody of her young son due to alcoholism.

The problem? It wasn't any good, she said. "It was so full of self-pity and histrionic emotional grandeur; it was just awful," said Gilgan. "And so I stopped writing it and set it down... it just wasn't working."

A few years ago, she decided she wanted to revisit the custody battle and her subsequent path to sobriety, but this time as a novel. "It gave me more freedom," said Gilgan. She finally finished her first draft about a year ago; the nonfiction essay published in "Modern Love," she says, was born from that. "This essay then came out of that novel," Gilgan said. Distilling it into a shorter essay, she thought, might help her get her book published. "I thought, 'Okay, I'm going to try and leverage this. I'm going to try and market the essay to try and help bring my book to publication.'"

Gilgan was strategic. She turned to chatbots, which she says she started playing around with about two years ago, to help her craft her essay in a way that she believed would appeal to the NYT's "Modern Love" editorial staff. "Rather than sitting on Google reading through tons of other people's articles about how to get published in 'Modern Love' and 'here's what Dan Jones looks for,'" said Gilgan, referring to the column's longtime editor, "I asked AI, 'Okay, boil this down for me. Take everything -- every scrap of information on the internet that you can find -- to help me get this essay published in the Times.'"

Gilgan used a mix of chatbots throughout the process, she said.
"We homeschool our kids, so we've always got laptops open around the house," she explained. "One will have ChatGPT on it, and one will have Copilot on it. Or if I'm using my cell phone, whatever happens to be on it is what I'll use. I don't have any go-tos."

Though she holds that she didn't use AI to generate any "new ideas," as she put it, she says she did lean on the tech as a "first reader," running and re-running her writing through chatbots and asking questions about how best to construct her piece to suit her mission: publication.

"One of the bits of feedback I got from AI was, 'Okay, you're going to have to really focus on a tight story arc.' Okay, I need to do that. So if I get that feedback, I go back to my essay and I start rewriting, start shifting things around," said Gilgan. "There were a lot of questions I asked it, about, 'Does this sound too histrionic? Am I just making my case that my ex-husband was the only problem?'"

"I used it to help me stay rational and unemotional about a really emotional topic," she continued, adding that there's a "fine line in writing first-person narrative where you're relatable but you're not 'terminally unique' in your emotions -- I used [AI] to help me balance that." (In that way, Gilgan said, she leaned on AI the same way she asked questions of her Alcoholics Anonymous sponsor throughout the writing process; chatbots, she said, are almost like having her sponsor "available, on my phone with me" at all times.)

This process, however, led some to accuse Gilgan of smuggling full-on undisclosed AI slop into the pages of the paper of record.
Asked what she makes of this indictment of her writing style, and whether she believes leaning on AI for the "Modern Love" piece significantly altered her voice as a writer, she laughed that the backlash simply speaks to her technical ability -- and insisted that while her writing style has "matured" since she was first published in 2017, she doesn't believe AI has fundamentally transformed her writing. "At first it was like, 'Oh my god,'" she recounted. "And then it's like, 'But I'm just a technically proficient writer.'"

"One of the issues seems to be things around disclosure: 'How much was AI used? Did it generate content?' My direct answer to that question is: no more so than an editor would generate content for me," Gilgan contended. "An editor is going to realistically rewrite a sentence or two for me. They're not going to insert a sentence into my piece, but they are going to rephrase. They're going to shift the wording. They're going to use some synonyms in there, that sort of thing. But they're not going to come up with a sentence all on their own. And it was the same with this."

***

In March, asked about the online controversy stemming from Gilgan's "Modern Love" essay, the NYT told Futurism that journalism at the newspaper "is inherently a human endeavor," and "that will not change." "As technology evolves, we are consistently assessing best practices for our newsroom," a spokesperson for the paper added.

When we first started emailing, Gilgan referred to AI as a new "tool" in her workflow. She also compared AI to using a typewriter instead of a computer, or relying on a thesaurus. When I suggested that many writers might recoil from the characterization of chatbots as a "tool" like any other, given that AI can both wholesale generate and drastically transform a piece of text in ways no previous technology could, she insisted that AI can't replace the role of a human editor.
And if she didn't want to actually write, she added, she just wouldn't be a writer. "Is there a risk with AI? Absolutely," said Gilgan. "If I want to be lazy about my writing, yeah -- AI could do it all for me." But for the sake of her own sense of integrity, she added, "I hope I don't ever get that lazy that I just hand it over to AI."
The publishing world is grappling with an escalating AI crisis after Hachette pulled the horror novel 'Shy Girl' and The New York Times writer Kate Gilgan faced accusations of using AI for her 'Modern Love' essay. Unreliable detection tools are flagging human-written content as AI-generated, while some authors use chatbots without disclosure, creating widespread distrust among readers and anxiety for debut writers seeking representation.
The publishing industry is facing an unprecedented reckoning with AI in writing as major scandals expose deep vulnerabilities in how books and essays reach readers. Last month, Hachette made headlines by canceling the release of "Shy Girl," a horror novel by Mia Ballard, in the United States over evidence suggesting it had been partly produced by AI [1]. The publisher also pulled the book in the United Kingdom, where it had been released last year after Ballard initially self-published it. Commenters on Goodreads and Reddit had complained for months about what they called obvious evidence of chatbot language, raising questions about how a major publishing company failed to catch signs of AI writing during its vetting process [1].
Source: NYT
Almost simultaneously, The New York Times found itself embroiled in controversy when writer Kate Gilgan faced accusations of using AI to produce a personal essay published in the prestigious Modern Love column [2]. The accusations, initially made without hard evidence by writer Becky Tuch, pointed only to the style of Gilgan's article as proof. As Gilgan later conceded to The Atlantic, she did use ChatGPT, Claude, Copilot, and Perplexity for conceptualizing and editing the piece, though she denied copying and pasting anything directly from an AI into her essay [2]. The situation highlights the murky territory of AI's role in creative writing, where the line between assistance and authorship becomes increasingly difficult to define.
Source: Futurism
The fallout from these scandals has created a climate of fear and confusion, particularly for debut authors. Antonio Bricio, an engineering consultant in Guadalajara, Mexico, who finished his first science fiction novel last fall, represents countless writers now caught in this uncertainty. After learning about the "Shy Girl" cancellation on social media, Bricio tested his own work using Originality.ai, an AI detection tool [1]. The detector initially showed 100 percent confidence that he had used AI, despite Bricio only using DeepL to translate occasional words from Spanish to English. After deleting some sentences and rerunning the test, the program reversed its verdict to 100 percent human-written. "What if publishers or agents start running these A.I. tools on everybody?" Bricio said. "Everybody is going to walk on eggshells from now on" [1].

These unreliable detection tools are creating widespread distrust among readers and authors alike. Andrea Bartz, a thriller writer who was a lead plaintiff in the class-action lawsuit against Anthropic that resulted in a $1.5 billion settlement, recently tested her own writing in Ace, an AI checker. The program labeled her work as 82 percent AI-generated and then offered to "humanize" her text [1]. When Bartz shared her experience on Substack, dozens of writers responded with similar stories. Novelist Rene Denfeld commented, "I guess that's what happens when your books were stolen to program A.I.," noting that detection programs had also falsely flagged her human-written work [1].
The debate within the literary community has intensified as writers, publishers, and readers struggle to establish clear boundaries. Most major publishing houses don't have clear-cut guidelines for AI use, operating instead on trust and the expectation that writers will be transparent about their methods [1]. But with AI seeping into multiple aspects of book creation -- from research to editing to composing sentences -- confusion reigns over which forms of AI use cross ethical lines.

Kate Gilgan's case illustrates this ambiguity. She told Futurism that she wasn't worried when the accusations first emerged "because AI wasn't used to generate that content" [2]. Yet she acknowledged using multiple chatbots strategically to craft her essay in a way that would appeal to The New York Times editorial staff. Gilgan had been writing seriously for about ten years, covering deeply personal topics, and had attempted to write about her custody battle and alcoholism 15 years earlier but found the result "full of self-pity and histrionic emotional grandeur" [2]. She turned to AI tools to help refine her approach for the Modern Love column, believing it would help market her unpublished novel.
The immediate impact is already visible: debut authors face heightened scrutiny, readers question the authenticity of published works, and the vetting process at major publishers is under intense examination. "We're reaching this era of distrust, with no easy way to prove the veracity of your own writing," said Andrea Bartz [1]. Publishing consultant Jane Friedman called the situation "a wake-up call for the industry" [1].

Longer-term questions loom about editorial integrity and how the industry will adapt. Will publishers implement mandatory disclosure policies? How can they distinguish between acceptable AI assistance and problematic generation? The current approach -- relying on trust without verification -- appears increasingly untenable as AI tools become more sophisticated and accessible. Writers are opening up about their sometimes extensive use of AI, but without industry-wide standards, each revelation sparks fresh controversy. The fear that AI writing can steal past professional editors threatens to undermine confidence in traditional publishing's quality control, potentially accelerating the existing crisis of trust between authors, publishers, and readers in an already fragile ecosystem.
Summarized by Navi
25 Mar 2026 • Entertainment and Society