2 Sources
[1]
Is The Washington Post's new AI podcast a hallmark of the future?
It's not your mother's podcast -- or your father's, or anyone else's. The Washington Post's new offering, "Your Personal Podcast," uses artificial intelligence to customize podcasts for its users, blending the algorithm you might find in a news feed with the convenience of portable audio.

The podcast is "personalized automatically based on your reading history" of Post articles, the newspaper says on its help page. Listeners also have some control: At the click of a button, they can alter their podcast's topic mix -- or even swap its computer-generated "hosts."

The AI podcast immediately made headlines -- and drew criticism from people questioning its accuracy and the motives behind it.

Nicholas Quah, a critic and staff writer for Vulture and New York magazine who writes a newsletter about podcasts, says the AI podcast is an example of the Post's wide-ranging digital experiments -- but one that didn't go quite right. "This is one of many technologically, digitally oriented experiments that they're doing" that is aimed at "getting more audience, breaking into new demographics," he says. Those broader efforts range from a generative AI tool for readers to a digital publishing platform. But in this case, Quah adds, "it feels like it's compromising the core idea of what the news product is."

On that help page, the newspaper stresses that the podcast is in its early beta phase and "is not a traditional editorial podcast." Bailey Kattleman, head of product and design at the Post, calls it "an AI-powered audio briefing experience" -- and one that will soon let listeners talk back to it. "In an upcoming release, they'll be able to actually interact and ask follow-up questions to dig in deeper to what they've just heard," Kattleman says in an interview with NPR.

As technically sophisticated as that sounds, there are many questions about the new podcast's accuracy -- even its ability to correctly pronounce the names of Post journalists it cites. Semafor reported that errors, cited by staffers at the Post, included "misattributing or inventing quotes and inserting commentary, such as interpreting a source's quotes" as the paper's own stance. In the newspaper's app, a note advises listeners to "verify information" by checking the podcast against its source material.

In a statement, the Washington Post Guild -- which represents newsroom employees and other staff -- tells NPR, "We are concerned about this new product and its rollout," alleging that it undermines the Post's mission and its journalists' work. Citing the paper's standing practice of issuing a correction if a story contains an error, the guild added, "why would we support any technology that is held to a different, lower standard?"

"The Post has certainly gone out on a ledge here among U.S. legacy publishers," Andrew Deck tells NPR. But he adds that the newspaper isn't the first to experiment with AI-generated podcasts in the wider news industry. Deck, who writes about journalism and AI for Harvard University's Nieman Lab, points to examples such as the BBC's My Club Daily, an AI-generated soccer podcast that lets users hear content related to their favorite club. In 2023, he adds, "a Swiss public broadcaster used voice clones of real radio hosts on the air." News outlets have also long offered an automated feature that converts text articles into computer-generated voices.

Even outside of the news industry, AI tools for creating podcasts and other audio are more accessible than ever.
Some promise to streamline the editing process, while others can synthesize documents or websites into what sounds like a podcast conversation.

"It's cost-effective," says Gabriel Soto, senior director of research at Edison Research, which tracks the podcast industry. "You cut out many of the resources and people needed to produce a podcast (studios, writers, editors, and the host themselves)." And if a brand can create a successful AI virtual podcast in today's highly competitive podcasting market, Soto adds, it could become valuable intellectual property in the future.

Deck says that if the Post's experiment works, the newspaper "may be able to significantly scale up and expand its audio journalism offerings, without investing in the labor that would normally be required to expand." In an interview, Kattleman stresses the new product isn't meant to replace traditional podcasts: "We think they have a unique and enduring role, and that's not going away at the Post."

For Deck, the level of customization it promises is an innovation. Being able to tailor a podcast specific to one person, he says, "is arguably beyond what any podcast team in journalism right now can produce manually."

In an example the Post published, listeners can choose from voice options with names like "Charlie and Lucy" and "Bert and Ernie." Kattleman says her team was working from the idea that for an audience, there isn't a "one size fits all" when it comes to AI and journalism. "Some people want that really straight briefing style; some people prefer something more conversational and more voicey," she says.

Quah says that adding an AI podcast is a bid to make stories accessible to a broader audience. He says that with the podcast, the Post seems to be trying to reach young people who "don't want to read anymore, they just want to listen to the news." A key goal, Kattleman says, is to make podcasts more flexible, to appeal to younger listeners who are on the go.

Outlining the process behind the Post's AI podcast, Kattleman says, "Everything is based on Washington Post journalism." An LLM, or large language model, converts a story into a short audio script, she says. A second LLM then vets the script for accuracy. After the final script is stitched together, Kattleman adds, the voice narrates the episode.

Soto, of Edison Research, says that 1 in 5 podcast consumers say they've listened to an AI-narrated podcast. But he adds that for podcast listeners, "many prefer the human connection, accepting AI tools to assist in creating the content, but not in executing or hosting the podcast."

The new AI podcast reminds Deck a bit of the hyper-personalized choices for users offered by TikTok and other social media. "There is a level of familiarity and, arguably, comfort with algorithmic curation among younger audiences," he says.

But while younger audiences tend to be tech savvy, many of them are also thoughtful about authenticity and connection. "Community is at the core of why people listen to podcasts," Soto says.

Then there's the idea of a host or creator's personality, which drives engagement on TikTok and other platforms. "These creators have built a relationship with their audience -- and maybe even trust -- even if they haven't spoken to sources themselves," Deck says. "This type of news content is a far cry from the disembodied banter of AI podcast hosts."

One big potential consequence is the loss of jobs -- and for companies, the loss of talent.
"The automation of it kind of erases the entire sort of voice performance industry," Quah says. "There are people who do this for a living," he adds, who could "produce higher quality versions of these recordings." There are also concerns that, if AI chooses a story and controls how it's presented, it might create an echo chamber, omitting context or skepticism that a journalist would likely provide. "AI-based news personalization tends to land firmly in the camp of delivering audiences what they want to know," Deck says. Deck says he's willing to give the Post's AI podcast a bit of time to see how it plays out. But Deck does have a chief concern: "I can say point blank, generative AI models hallucinate." And when AI models are wrong, he says, they're often confidently so. Blurring boundaries between human and AI voices could also raise questions of trust -- a critical factor for a news organization. As Soto puts it, "What happens when your audience expects content from the real you and ends up finding AI instead?"
[2]
The Washington Post's AI Generated Podcasts Are Already an Error-Laden Disaster
"It is truly astonishing that this was allowed to go forward at all." Earlier this week, the Washington Post announced that it would be launching "personalized" AI powered podcasts that would let users choose their own AI host to regale them on their choice of topics. And now for an entirely unsurprising update: the AI podcasts have turned out to be complete, error ridden disasters. Semafor reports that less than 48 hours after launching, the AI podcasts have sparked outrage among the WaPo's rank and file and editors alike, after they caught the AI-generated podcasts committing ghastly journalistic sins, like inventing quotes and misattributing information. "It is truly astonishing that this was allowed to go forward at all," one WaPo editor fumed on Slack. "Never would I have imagined that the Washington Post would deliberately warp its own journalism and then push these errors out to our audience at scale." "If we were serious we would pull this tool immediately," the editor added. The podcast's errors are exactly the kind you'd expect an AI model to make. Some are simple but noticeable cases of mispronunciation. But at times, according to Semafor, the AI podcast hosts would insert commentary, essentially editorializing by misconstruing a source's quote as the paper's position on the issue. In Slack messages obtained by Status, other staffers railed blasted the AI feature. "What are the guardrails to ensure accuracy in this podcast?" one asked. "It's a total disaster," another told Status. "I think the newsroom is embarrassed." The paper's head of standards Karen Pensiero wrote in an internal message to staff shared with Semafor that the situation was "frustrating for all of us." Readers have been noticing the errors, too. Jane Rosenzweig, a writer who covers tech, complained on Bluesky that WaPo's AI podcaster "announced they would be discussing 'whether or not people with intellectual disabilities should be executed' without mentioning any context until later." According to Semafor, there's a significant disconnect between the newsroom and the Post's product division. The podcast's product team sees the errors as a normal part part of rolling out a new and still experimental feature. The journalists, evidently, see it as an insult to their very profession. The podcasts are developed in collaboration with the AI voice cloning company Eleven Labs, and represent the latest way that the newspaper has incorporated AI tech under Jeff Bezos' ownership. The Post has already been using AI to provide summaries of its stories, and put forth a plan for letting non-professional writers submit articles written with AI. It also has a dedicated "Ask The Post AI" page for fielding questions to a chatbot trained on its articles.
The Washington Post launched a personalized podcast that uses artificial intelligence to customize audio briefings based on each reader's history. Within 48 hours, the AI podcast sparked internal outrage after journalists discovered it was inventing quotes, misattributing information, and inserting commentary that misconstrued sources. The Washington Post Guild raised concerns about undermining journalistic standards, while the product team defended it as an experimental feature.
The Washington Post has introduced "Your Personal Podcast," an AI-generated podcast feature that personalizes news content based on individual reading history [1]. The AI podcast allows users to customize their audio briefing by selecting topics and even swapping computer-generated hosts with names like "Charlie and Lucy" or "Bert and Ernie" [1]. Developed in collaboration with voice cloning company Eleven Labs, this AI-powered audio product represents the latest digital experiment under Jeff Bezos' ownership [2]. Bailey Kattleman, head of product and design at the Post, describes it as "an AI-powered audio briefing experience" that will soon enable listeners to interact and ask follow-up questions [1].
Less than 48 hours after launch, the AI podcast became an error-laden disaster that exposed serious flaws in automation and accuracy [2]. According to Semafor, journalists discovered the system was inventing quotes and misattributing information, committing what one editor called "ghastly journalistic sins" [2]. The errors ranged from simple mispronunciation of Post journalists' names to more serious problems where the AI hosts inserted commentary, essentially editorializing by misconstruing a source's quote as the paper's position [1][2]. Writer Jane Rosenzweig complained on Bluesky that the AI podcaster "announced they would be discussing 'whether or not people with intellectual disabilities should be executed' without mentioning any context until later" [2].

The newsroom erupted in frustration as journalists viewed the launch as a breach of journalistic standards. "It is truly astonishing that this was allowed to go forward at all," one Washington Post editor fumed on Slack. "Never would I have imagined that the Washington Post would deliberately warp its own journalism and then push these errors out to our audience at scale" [2]. The Washington Post Guild, representing newsroom employees, expressed concerns about the product's rollout, stating it undermines the Post's mission and journalists' work [1]. The guild questioned why the newspaper would support technology held to a "different, lower standard" than traditional journalism, which requires corrections for any errors [1]. Karen Pensiero, the paper's head of standards, acknowledged the situation was "frustrating for all of us" in an internal message [2].
A significant disconnect emerged between the product division and newsroom staff over how to evaluate the experimental feature. The podcast's product team views the errors as a normal part of rolling out a new feature still in its beta phase, while journalists see it as an insult to their profession [2]. The newspaper stresses on its help page that the podcast is in early beta phase and "is not a traditional editorial podcast," with a note advising listeners to "verify information" by checking against source material [1]. Nicholas Quah, a podcast critic for Vulture and New York magazine, suggests this experiment "feels like it's compromising the core idea of what the news product is" [1].

The Washington Post isn't alone in experimenting with AI-generated audio journalism. Andrew Deck from Harvard's Nieman Lab points to examples like the BBC's My Club Daily, an AI-generated soccer podcast, and a Swiss public broadcaster that used voice clones of real radio hosts in 2023 [1]. Gabriel Soto from Edison Research notes the appeal: "It's cost-effective. You cut out many of the resources and people needed to produce a podcast (studios, writers, editors, and the host themselves)" [1]. Deck suggests that if successful, the Washington Post "may be able to significantly scale up and expand its audio journalism offerings, without investing in the labor that would normally be required" [1]. The level of customization offered by the algorithm is "arguably beyond what any podcast team in journalism right now can produce manually," though the current implementation raises serious questions about whether automation can maintain the accuracy and standards readers expect from established news organizations [1].

Summarized by Navi