Curated by THEOUTPOST
On Sat, 4 Jan, 8:03 AM UTC
3 Sources
[1]
Book App Alarmed as Its AI Starts Mocking Users for Reading Books by Women and Minorities
The popular book app Fable has come under fire for its AI-generated annual roundups, which some users say gave them offensive messages about race and gender, Wired reports.

These days, every app imaginable is deploying its own take on the wildly popular Spotify Wrapped feature by providing personalized end-of-year recaps of users' consumption habits. Hopping on the trend, Fable tried to stand out by using AI to "playfully roast" its readers. In some cases, however, the AI veered into overly edgy territory, in what appears to be an all-too-common case of a large language model defying its guardrails.

The summary for Fable user Tiana Trammell, for example, praised her for being a "soulful explorer" of "Black narratives," before about-facing completely: "Don't forget to surface for the occasional white author, okay?"

"I typically enjoy my reader summaries from [Fable], but this particular one is not sitting well with me at all," Trammell wrote on Threads.

Other users reported similar cases of the AI striking an almost comically reactionary tone. One writer's Fable summary called him a "diversity devotee" and asked if he's "ever in the mood for a straight, cis white man's perspective." Trammell says she's also seen other people whose summaries snidely commented on "disability and sexual orientation."

It's unclear how widespread these cases were, but Fable has responded to the complaints by issuing a formal apology. "To our community: we are deeply sorry for the hurt caused by some of our Reader Summaries this week," the company posted on Threads. "We promise to do better."

What "doing better" looks like, if you're wondering, is tinkering with its AI model. "For the time being, we have removed the part of the model that playfully roasts the reader, and instead, the model simply summarizes the user's taste in books," Kimberly Marsh Allee, Fable's head of community, told Wired. Some users, though, would prefer to see the AI's proverbial head roll.
"They need to say they are doing away with the AI completely," fantasy romance author A.R. Kaufer told Wired. "This 'apology' on Threads comes across as insincere, mentioning the app is 'playful' as though it somehow excuses the racist/sexist/ableist quotes."

Fable's decision to muzzle its AI raises serious questions about the tech's usefulness for major brands. Surely AI's novelty is that it can whip up pithy remarks that sound like human speech. If it's not trustworthy enough to perform that function without making a serious faux pas at every turn, why should brands trust it with their customers? And doesn't limiting these bots to dry summarizers defeat the point of what makes them appealing in the first place?
[2]
A Book App Used AI to 'Roast' Its Users. It Went Anti-Woke Instead
Fable, a popular social media app that describes itself as a haven for "bookworms and bingewatchers," created an AI-powered end-of-year summary feature recapping what books users read in 2024. It was meant to be playful and fun, but some of the recaps took on an oddly combative tone. Writer Danny Groves's summary, for example, asked if he's "ever in the mood for a straight, cis white man's perspective" after labeling him a "diversity devotee."

Books influencer Tiana Trammell's summary, meanwhile, ended with the following advice: "Don't forget to surface for the occasional white author, OK?"

Trammell was flabbergasted, and she soon realized she wasn't alone after sharing her experience with Fable's summaries on Threads. "I received multiple messages," she says, from people whose summaries had inappropriately commented on "disability and sexual orientation."

Ever since the debut of Spotify Wrapped, annual recap features have become ubiquitous across the internet, providing users a rundown of how many books and news articles they read, songs they listened to, and workouts they completed. Some companies are now using AI to wholly produce or augment how these metrics are presented. Spotify, for example, now offers an AI-generated podcast in which robots analyze your listening history and make guesses about your life based on your tastes.

Fable hopped on the trend by using OpenAI's API to generate summaries of users' past 12 months of reading, but it didn't expect the model to spit out commentary that took on the mien of an anti-woke pundit. Fable later apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive issuing the mea culpa. "We are deeply sorry for the hurt caused by some of our Reader Summaries this week," the company wrote in the caption. "We will do better."
Kimberly Marsh Allee, Fable's head of community, told WIRED the company is working on a series of changes to improve its AI summaries, including an opt-out option for people who don't want them and clearer disclosures indicating that they're AI-generated. "For the time being, we have removed the part of the model that playfully roasts the reader, and instead, the model simply summarizes the user's taste in books," she says.

For some users, adjusting the AI does not feel like an adequate response. Fantasy and romance writer A.R. Kaufer was aghast when she saw screenshots of some of the summaries on social media. "They need to say they are doing away with the AI completely. And they need to issue a statement, not only about the AI, but with an apology to those affected," says Kaufer. "This 'apology' on Threads comes across as insincere, mentioning the app is 'playful' as though it somehow excuses the racist/sexist/ableist quotes."

In response to the incident, Kaufer decided to delete her Fable account. So did Trammell. "The appropriate course of action would be to disable the feature and conduct rigorous internal testing, incorporating newly implemented safeguards to ensure, to the best of their abilities, that no further platform users are exposed to harm," she says.
[3]
Fable embroiled in controversy over offensive AI reader summaries. What happened?
Book tracking app Fable will remove its popular AI features after the platform generated reader summaries that were offensive regarding race, gender, sexuality and disability.

Fable's annual reading summaries - similar in style to Spotify Wrapped - are intended to be a "playful, fun way" to celebrate readers' "uniqueness," said Chris Gallello, head of product, in a video posted to social media.

Tiana Trammell was among those who got a controversial summary. When she unwrapped her annual synopsis, she found one that suggested she, a Black reader, prioritize white authors more. "Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air. Don't forget to surface for the occasional white author, okay?" the summary reads.

Other users have posted Fable reader summaries that said the disability narratives they read "could earn an eye-roll from a sloth" or that disparaged rom-com reads as setting "the bar for my cringe-meter."

Writer Danny B. Groves' summary called him a "Diversity Devotee" then continued, "Your bookshelf is a vibrant kaleidoscope of voices and experiences, making me wonder if you're ever in the mood for a straight, cis white man's perspective!" The comment was disorienting for Groves. He felt pride in his dedicated effort to read diversely, but the summary also reminded him of the years he spent not reading because he couldn't find books that represented him as a Black, gay man. "You wouldn't expect to see that sort of line on someone's readership that does not read diversely," he tells USA TODAY. "They wouldn't say 'Are you ever in the mood for a gay, trans, Black woman's perspective?'"

Fable apologizes after 'very bigoted' reader summaries, blames AI

In the first of two videos posted to Fable's account, Gallello noted some changes to AI disclosure and opt-outs, saying the "very bigoted" reader summaries were a shock to the Fable team.
Fable's use of AI was not meant to be a "surprise or deceptive" to users, Gallello said. The company had implemented safeguards and an offensive language filter: "Clearly in both cases, that failed this time around," he said. "So I think as a company, we kind of underestimated how much work needs to be done to make sure that these AI models ... are doing this in a responsible, safe way." USA TODAY has reached out to Fable for comment.

Trammell told USA TODAY she didn't have a problem with Fable's AI summaries before this one. A summary she received in December made her feel seen as a reader: "Your bookshelf radiates with a quest for joy, justice, and the power of personal journeys." But she believes more internal testing should be done to make sure this doesn't happen again.

For other users, the use of AI itself was the sticking point. "I think it's a massive disservice to rely heavily on AI," one Instagram user commented. "Especially with the readership community since a lot of us think it's harmful for the entire reading experience all the way from authors, editors, readers and general community members."

Groves agrees: "I recognize that Fable is a small team, and as a result of that ... they're likely unable to keep up with a review of every individual's reader summary. But if that's the case, then there shouldn't be an AI algorithm that's immediately pushing out content or generating an output that can create harm."

In a second video, posted Friday evening, Fable said it would remove three key features that utilize AI. "Having a feature that does any sort of harm in the community is unacceptable," Gallello said.

Some users consider leaving Fable after AI blunder

A popular alternative to Goodreads, Fable is beloved for its social function that allows readers to join book clubs and chat candidly about titles. Because of the unique community feature, Groves says he will be staying on Fable but will prioritize other apps like Storygraph.
"I'm willing to sacrifice a reader summary until there's a team in place that can better moderate the outputs from this AI system," he says. Across social media, some users have shared their desire to abandon the app altogether.

At the time of publication, Trammell says she's been disappointed with the email responses she's received from Fable, which cited the apology video and came only after her post gained traction. She deactivated her account after the first video went live. "I don't desire to restore a relationship with the app at this point," Trammell said. "But for people who are opting to stay on the site and people who in the future may sign up, they don't need to be subjected to that and it's their responsibility as a company to ensure that doesn't happen."

Groves also hopes to see Fable use the incident as a catalyst to highlight diverse reading. "(If) your platform has become a place where people have experienced harm and as a result of that, people are fleeing because they don't want to experience harm again, then maybe take a risk and tailor your focus, your interests and your platform to center more of those diverse stories," Groves says.

Clare Mulroy is USA TODAY's Books Reporter, where she covers buzzy releases, chats with authors and dives into the culture of reading. Tell her what you're reading at cmulroy@usatoday.com.
Fable, a popular book app, faces backlash after its AI-powered annual reader summaries produced offensive comments about race, gender, and diversity, leading to user outrage and the removal of AI features.
Fable, a popular book tracking and social media app for readers, has found itself embroiled in controversy after its AI-generated annual reader summaries produced offensive content. The app, which describes itself as a haven for "bookworms and bingewatchers," attempted to create personalized end-of-year recaps for its users, similar to the widely popular Spotify Wrapped feature [2].

Several users reported receiving summaries that contained inappropriate comments about race, gender, sexuality, and disability. For instance, Tiana Trammell, a Black reader, was advised to "surface for the occasional white author" [2]. Writer Danny Groves, labeled a "diversity devotee," was asked if he was "ever in the mood for a straight, cis white man's perspective" [2].

The AI's comments struck an almost comically reactionary tone, with some summaries snidely commenting on disability and sexual orientation [1]. This unexpected turn of events left many users feeling hurt and betrayed by an app they had previously enjoyed.

In response to the growing backlash, Fable issued formal apologies across various social media platforms. Chris Gallello, head of product, acknowledged in a video that the "very bigoted" reader summaries were a shock to the Fable team [3].
Kimberly Marsh Allee, Fable's head of community, announced several changes to improve the AI summaries, including an opt-out option for users who don't want them and clearer disclosures that the summaries are AI-generated [2].
In a follow-up video, Fable announced the removal of three key features that utilize AI, acknowledging that "having a feature that does any sort of harm in the community is unacceptable" [3].

The incident has led some users to delete their Fable accounts, while others are considering alternative platforms. A.R. Kaufer, a fantasy and romance writer, expressed dissatisfaction with Fable's response, calling for a complete removal of AI from the platform [1].

Trammell, who deactivated her account, suggested that Fable should "disable the feature and conduct rigorous internal testing" before reintroducing any AI-powered features [2].

This controversy raises serious questions about the readiness of AI for consumer-facing applications. While AI's novelty lies in its ability to generate human-like responses, the Fable incident demonstrates the potential risks when these systems fail to adhere to ethical guidelines [1].
As companies increasingly turn to AI to enhance user experiences, the Fable controversy serves as a cautionary tale, highlighting the need for robust testing, clear disclosure, and careful consideration of AI's role in consumer products.
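Fable's head of product described a two-layer setup: prompt-level safeguards plus an offensive language filter on the generated text, both of which failed. The sketch below illustrates the second layer only, using hypothetical function names and a toy keyword check as a stand-in for a real moderation model (a production system would score outputs with a dedicated moderation API, not a keyword list):

```python
# Toy post-generation guardrail: screen an LLM-written "roast" summary
# before it reaches the user, and fall back to a neutral summary if it
# is flagged. All names are hypothetical illustrations, not Fable's code.

FLAGGED_PATTERNS = [
    # Crude stand-ins for categories a moderation model would score.
    "white author",
    "cis white man",
]

def is_safe(text: str) -> bool:
    """Return False if the summary matches any flagged pattern."""
    lowered = text.lower()
    return not any(pattern in lowered for pattern in FLAGGED_PATTERNS)

def deliver_summary(roast_summary: str, plain_summary: str) -> str:
    """Prefer the 'playful' summary, but fall back to the plain one."""
    if is_safe(roast_summary):
        return roast_summary
    return plain_summary

# Example: a flagged roast is replaced by the neutral summary.
roast = "Don't forget to surface for the occasional white author, okay?"
plain = "You read 24 books this year, mostly literary fiction."
print(deliver_summary(roast, plain))
```

A keyword check like this is trivially incomplete, which illustrates the underestimation Gallello described: outputs like the ones quoted in these articles sailed past both layers, and the company ultimately disabled the roast behavior rather than patch the filter.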
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved