3 Sources
[1]
Why Spotify has no button to filter out AI music
In mid-2025, frustration boiled over for Cedrik Sixtus. Finding his Spotify playlists increasingly sprinkled with tracks he suspected were AI-generated, the Leipzig-based software developer built a tool to automatically label and block them from his listening. He uploaded his Spotify AI Blocker to a couple of code-sharing websites, where hundreds have downloaded it. It filters out a growing list of more than 4,700 suspected AI artists, drawing on existing community tracking efforts and signals such as unusually high release volumes and AI-style cover art, supplemented with external detection tools. "It is about choice - if you want to hear AI music or if you don't," says Sixtus, who would prefer that Spotify itself labelled AI-generated content and let users filter it out. Sixtus's tool is installed initially via the web browser version of Spotify. He warns that using his software "may violate Spotify's terms of service". He isn't alone: feelings run deep on the community forum of the world's most popular music streaming service. While for Sixtus the issue is that AI music doesn't sound right, others simply don't want to listen to music made by a bot. Spotify has made some concessions to address such concerns. In April it launched a test feature which shows, in a song's credits, how an artist used AI. But it's a voluntary system based on what an artist tells their record label or distributor. "We know this isn't a complete solution on its own. Building a truly comprehensive system is a challenge that requires industry-wide alignment," Spotify said in April. Spotify's position is certainly a long way from actively identifying AI-generated music and giving users an option to filter it out. "It is a difficult - borderline existential - balancing act for Spotify," says Robert Prey, who studies streaming platforms at Oxford University's Internet Institute.
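The core idea behind a blocklist tool like Sixtus's can be sketched in a few lines: keep a community-maintained list of suspected AI artist names and partition a playlist against it. This is a minimal illustration, not taken from his actual code; the artist entries and data shapes here are hypothetical.

```python
# Minimal sketch of a blocklist-based playlist filter, in the spirit of
# community tools like the Spotify AI Blocker. Names and data shapes are
# hypothetical illustrations, not taken from the real tool.

# Community-maintained list of suspected AI artists (illustrative entries).
SUSPECTED_AI_ARTISTS = {
    "the velvet sundown",
    "breaking rust",
    "sienna rose",
}

def filter_playlist(tracks, blocklist=SUSPECTED_AI_ARTISTS):
    """Return (kept, blocked) partitions of a playlist.

    Each track is a dict with at least an "artist" key; matching is
    case-insensitive, since community blocklists are usually plain name lists.
    """
    kept, blocked = [], []
    for track in tracks:
        if track["artist"].strip().lower() in blocklist:
            blocked.append(track)
        else:
            kept.append(track)
    return kept, blocked

playlist = [
    {"title": "Dust on the Wind", "artist": "The Velvet Sundown"},
    {"title": "Real Song", "artist": "A Human Band"},
]
kept, blocked = filter_playlist(playlist)
print(len(kept), len(blocked))  # → 1 1
```

A real tool would also weigh the softer signals the article mentions, like release volume and cover-art style, rather than relying on name matching alone.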
Spotify is trying to avoid value judgments about how music is created, but risks eroding trust among listeners, artists and the wider industry if it fails to offer enough transparency, he explains. "It has to figure out what listeners want and how artists feel - all while AI is improving, being used more widely and becoming harder to detect," he adds. The arrival of AI tools for music is both seducing and unsettling the music world. Generative AI music services like Suno and Udio now produce increasingly polished, fully realised songs, complete with lyrics, vocals and instrumentation, from simple text prompts in seconds. In one recent controlled test, part of a Deezer-Ipsos poll, 97% of listeners failed to correctly distinguish between AI-generated and human-made tracks. And tens of thousands of AI tracks appear to be uploaded to streaming platforms daily, where they could dilute revenue pools for human artists - even if most currently attract few listens. Spotify, along with YouTube Music and Amazon Music, has so far avoided any clear user-facing labels or filters for AI-generated music, neither openly using detection tools nor requiring systematic self-disclosure - though that may change as industry standards develop. Widely suspected AI acts like Sienna Rose, Breaking Rust and The Velvet Sundown are essentially treated like any other artists by Spotify, even as the platform removes what it considers AI-related spam such as mass uploads and short tracks designed to game the system. "Our priority is addressing harmful uses [of AI] like spam and impersonation, rather than trying to filter music based on how it was made," a Spotify spokesperson said, adding that AI in music isn't a binary category but exists on a spectrum. Deezer - a smaller competitor to Spotify - has taken a stronger approach.
Last year it began both tagging albums that contain AI-generated tracks produced by Suno, Udio and similar services, and excluding those tracks from algorithmic recommendations and human-made playlists. It uses its own in-house detection technology, based on training AI models to spot statistical patterns in the sound itself, and recently began offering it for sale across the industry. "We're the only music streaming platform that has that in place," notes Jesper Wendel, its head of global communications. In March, Apple Music said it was introducing "transparency tags" and would eventually require music labels and distributors to self-disclose when new songs or related content involve AI. But, as with Spotify's song-credit feature, critics point out those are unlikely to be reliable, as artists may prefer not to disclose AI use for fear of stigma - and how visible Apple's tags will be to listeners remains unclear. That AI music exists on a continuum does make labelling difficult, says Maya Ackerman, an expert in AI and computational creativity at Santa Clara University in California and co-founder and CEO of WaveAI, which offers an AI tool to help musicians write song lyrics. While some tools are "prompt in, song out" - where AI labels would be straightforward - others are designed for co-creation, assisting with specific parts of the music-making process. If a musician uses those tools, at what point does that warrant a label? And, Ackerman adds, even with tools like Suno and Udio, users can put a lot of their creative selves into the outputs - feeding in their own lyrics or spending many hours iterating on a song's sound. "From a distance it looks like such an obvious 'yes, label AI music' but, once you zoom in, you realise it is a very complicated thing," she says. There is also the technical challenge of accurately detecting AI-generated tracks, with potentially serious consequences if human musicians are falsely labelled as AI.
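The detection approach described above, training models to spot statistical patterns in the audio itself, can be illustrated with a deliberately toy version: extract crude summary statistics from a waveform and classify by nearest class centroid. Real systems such as Deezer's use learned models over far richer spectral features; everything below, including the two synthetic "classes", is invented for illustration.

```python
# Toy sketch of audio-statistics classification: summarise a waveform with
# two crude features, then assign the label of the nearest class centroid.
# Purely illustrative; production detectors use trained models on rich
# spectral representations, not hand-picked statistics like these.
import math

def features(samples):
    """Mean absolute amplitude and zero-crossing rate of a waveform."""
    n = len(samples)
    mean_abs = sum(abs(s) for s in samples) / n
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return (mean_abs, crossings / (n - 1))

def centroid(tracks):
    """Average feature vector over a list of example waveforms."""
    feats = [features(t) for t in tracks]
    return tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))

def classify(samples, centroids):
    """Label = class whose centroid is nearest in Euclidean distance."""
    f = features(samples)
    return min(centroids, key=lambda name: math.dist(f, centroids[name]))

# Synthetic "training data": a slowly varying class and a rapidly
# oscillating class, standing in for two statistically distinct sources.
smooth = [[math.sin(0.1 * i) for i in range(200)] for _ in range(3)]
buzzy = [[math.sin(2.5 * i) for i in range(200)] for _ in range(3)]
cents = {"smooth": centroid(smooth), "buzzy": centroid(buzzy)}

print(classify([math.sin(0.1 * i + 0.3) for i in range(200)], cents))  # → smooth
```

The arms-race problem discussed next follows directly from this setup: the centroids (or, in practice, model weights) only reflect the generators seen during training, so new generators require retraining.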
Even detecting fully AI-generated music can be fraught, notes Bob Sturm, who studies AI's disruption of music at the KTH Royal Institute of Technology in Sweden. AI detection systems are trained on outputs from existing AI music generation tools, but as those tools improve the software must be continually retrained, leading to what he characterises as a kind of "AI music arms race". It is a challenge, acknowledges Manuel Moussallum, Deezer's head of research, but the company's detection technology has so far maintained a low false-positive rate, he says, and research to better understand hybrid cases, where AI is only partially used, is ongoing. Yet others see such concerns as a distraction. "There is a lobbying message to say 'we can't draw the line, and therefore we shouldn't do anything'," says David Hoffman, a professor at Duke University in North Carolina who studies the impact of AI-generated music on artists' livelihoods. He argues platforms should at least label fully AI-generated tracks and assess the scale of the remaining issue from there. And listeners appear to want labels: in the Deezer-Ipsos poll, around 80% of respondents said AI-generated music should be clearly labelled, though views on filtering were more divided. "Listeners deserve awareness," says singer-songwriter Tift Merritt, who works with Hoffman as a practitioner-in-residence at Duke, citing the way we put nutritional labels on food or tell consumers whether it is organic. What may really be stopping Spotify from embracing labelling and filtering, many speculate, is economics. Spotify is trying to optimise for platform growth, says Prey from Oxford. Keeping recommendation systems as "unencumbered and free to operate as possible" helps with that. Detecting AI-generated content would add cost, Hoffman notes, and it may also be cheaper to serve up AI music. Past controversies fuel suspicion, critics note.
Spotify has, at various points, been accused of commissioning and promoting lower-cost music for background-style playlists - claims it denies. "All tracks on our platform are delivered by third-party rightsholders like labels and distributors, and the payment model is the same for all of them: royalties are paid out of the revenue pool based on listening share," a Spotify spokesperson said. Meanwhile the area is evolving. The music industry's standards body, DDEX, is continuing to work on a broad industry standard for AI disclosures in music credits, though how they are displayed will depend on the streaming platforms. And certain AI-generated content will be required to be labelled from August 2026 under the EU AI Act, though how Spotify will implement those rules remains unclear. It feels like the "Wild West" for AI music right now, says David Hesmondhalgh, professor of media, music and culture at the University of Leeds. But he also expects "some kind of order will emerge", just as the early-2000s file-sharing panic ultimately led to today's streaming industry. And Spotify appears to be recognising the pressure, recently announcing features aimed at elevating human artistry, including SongDNA and "About the Song", which give premium users deeper insight into a track's origins and contributors. "We believe the right response to AI in music isn't any single policy, it's a combination of proactive controls, industry-wide standards, and a deeper investment in the human creativity behind every track," added the Spotify spokesperson.
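The pro-rata model the spokesperson describes, royalties paid out of a shared revenue pool based on listening share, is simple arithmetic, and it also explains the dilution worry raised earlier: every stream an AI track attracts shifts a slice of the fixed pool away from human artists. A minimal worked example, with all figures invented for illustration:

```python
# Sketch of pro-rata streaming royalties as described in the quote: each
# rightsholder receives their share of total streams applied to the shared
# revenue pool. All numbers below are invented for illustration.

def pro_rata_payouts(revenue_pool, streams_by_rightsholder):
    """Split a fixed revenue pool in proportion to stream counts."""
    total = sum(streams_by_rightsholder.values())
    return {
        name: revenue_pool * count / total
        for name, count in streams_by_rightsholder.items()
    }

# Hypothetical month: a pool of 1,000,000 (in some currency) and two
# rightsholders, one human label and one high-volume AI uploader.
streams = {"human_label": 800_000, "ai_uploader": 200_000}
payouts = pro_rata_payouts(1_000_000, streams)
print(payouts)  # → {'human_label': 800000.0, 'ai_uploader': 200000.0}
```

Because the pool is fixed, the AI uploader's 20% of streams takes 20% of the money regardless of how that listening was generated, which is why per-track listens being "few" matters less than aggregate upload volume.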
[2]
'Every label in the world is delivering AI': Apple Music executive says over a third of uploads are '100% AI' as it clamps down on AI fraud
* Oliver Schusser from Apple Music says a third of uploads are AI-generated
* Despite this, only 0.5% of all users are engaging with this content
* Apple Music has plans to combat the AI epidemic even further

Apple Music has become the latest music streaming service to be hit by an influx of AI-generated content, says its VP Oliver Schusser -- but it's reaching only a very small percentage of all users. Speaking with Billboard, Schusser shed light on the state of AI music in Apple Music's library, sharing that "more than a third of what (Apple Music) get(s) today is actually what we would say is 100% AI". It goes to show that it's becoming easier for labels and distributors to submit music that's completely made using AI, and Apple Music isn't the only service facing this epidemic. Just last week, Deezer declared that nearly half of the new music submitted to the platform is AI-generated, resulting in the company's decision to stop offering hi-res versions of these songs. So, how is Apple planning to put out the AI fire? Schusser went into further detail in his interview. "We've never talked about this -- but we've developed technology in-house that would allow us to exactly see what music people are delivering us, what AI (model) it is and all that," he reveals, likely referring to Transparency Tags. Back in March, Apple sent a letter to industry partners revealing its plans to roll out 'Transparency Tags', a new metadata system to help flag AI-generated and AI-assisted music. This means labels and distributors can disclose whether AI has been used in a song's production when submitting to Apple Music. Though it's optional, Schusser made it clear that he "really need(s) the content providers and the labels to take responsibility".
There's no denying that fully AI-generated music is cropping up in the best music streaming services, but Schusser unveiled an interesting statistic that may come as a surprise: despite the rise, it's not having a huge impact on users' listening and engagement habits. "The reality is, the usage of the AI music on Apple Music is really tiny. I'm rounding, but it's below 0.5% of usage. We're just at the beginning here," he told Billboard -- but fraud is still rife. This is another issue on which Apple Music is clamping down, and it's been doing so since the iTunes days: "This has been a 20-year journey because there was fraud, obviously, in iTunes already," Schusser said, which led to the introduction of Apple's fraud penalty. The company also doubled this penalty as of this year. But the battle isn't over. As Schusser puts it, "We invest way more than anyone else in reducing and eliminating fraud. We implemented a fraud penalty four years ago, where if we catch someone, then we actually take the money and put it back in the pool. We need to monitor AI music because there's a correlation between AI and fraud". He also shared that Apple has seen a "60% reduction" in fraudulent uploads after implementing the penalty. As it stands, I've been one of the lucky ones not to have run into AI-generated music flooding my recommendations in Apple Music or Spotify, though the latter has come under significant scrutiny for housing AI slop. Like other platforms, Spotify is also working to safeguard users, removing 25 million AI tracks in the last 12 months and devising a solid AI combat strategy for the future.
[3]
Spotify apparently has no solid plan to label AI-generated music
There's a quiet anxiety spreading through music streaming -- and Spotify, the platform more than half a billion people trust to soundtrack their lives, is doing remarkably little about it. AI-generated tracks are flooding streaming platforms at a pace that would've felt dystopian five years ago. Tens of thousands of them, every single day, slipping into the same playlists and recommendation queues as your favorite human artists. And most listeners wouldn't even know the difference -- research suggests the overwhelming majority can't tell them apart in a blind listen.

Listeners are already solving it themselves

So when people started noticing something felt off, they started doing something about it themselves. One developer in Germany got so fed up with suspected AI tracks bleeding into his Spotify playlists that he built his own tool to flag and block them. He uploaded it online. Hundreds of people downloaded it immediately. That alone should tell Spotify something. But Spotify's response so far has been more of a corporate shrug than a genuine reckoning. The platform recently rolled out a feature that shows AI usage in a song's credits -- but only if the artist actually admits to it. Voluntary self-disclosure from people who might fear career damage for doing so. That's not transparency; that's just the appearance of it. On the other hand, Deezer, far smaller and less powerful than Spotify, has already deployed its own detection technology and started tagging and filtering AI-generated content from its recommendations. Apple Music is at least moving toward mandatory disclosure. Spotify, the biggest platform in the room, is still standing at the doorway, saying it's complicated.

Yes, it's complicated, but that's not an excuse

The line between AI-assisted and AI-generated is definitely blurry. A musician who uses AI to help write a verse is a different conversation from someone who typed a prompt and uploaded the result.
Experts in the field acknowledge this isn't a clean binary. Mislabeling a human artist as AI would be a serious mistake with real consequences. But here's the thing -- nobody is asking for perfection. What listeners want, and what artists deserve, is a starting point: label the fully AI-generated stuff, then assess the scale of the grey area from there. The argument that it's too hard to do anything, so we shouldn't do anything, is starting to sound like a convenient excuse. Because there's money in this somewhere. AI-generated music is cheap to produce, potentially cheaper to serve, and doesn't require royalties the way human artists do. The incentive structures here aren't invisible. When the world's biggest music platform declines to ask too many questions about where its content comes from, it's worth wondering why.

A trust problem in the making

There's a version of this story where Spotify eventually gets it right -- where transparency tools, industry standards, and platform accountability catch up with the technology. That future might even be nearer than it seems, with regulatory pressure building and the music industry's standards bodies inching toward disclosure frameworks. But right now, listeners are downloading third-party blockers and double-checking their playlists as if reading the fine print on a suspicious contract. That's not the relationship a platform should want with its audience. Spotify has built its entire brand on helping people discover music they love. If people stop trusting what they're hearing, that brand means very little.
AI-generated music now comprises over a third of Apple Music uploads, with tens of thousands of AI tracks flooding streaming platforms daily. While Spotify relies on voluntary disclosure, frustrated users build their own blocking tools. Deezer deploys detection technology as the music industry grapples with transparency and listener trust.
AI-generated music has reached a critical threshold across music streaming services, with Apple Music VP Oliver Schusser revealing that "more than a third" of current uploads are what the platform considers "100% AI" [2]. Deezer reported similar figures last week, stating nearly half of new music submitted is AI-generated. Tens of thousands of AI tracks appear to be uploaded to streaming platforms daily, slipping into playlists and recommendation systems alongside human artists [1]. Generative AI music services like Suno and Udio now produce polished songs complete with lyrics, vocals and instrumentation from simple text prompts in seconds. In a recent Deezer-Ipsos controlled test, 97% of listeners failed to correctly distinguish between AI-generated and human-made tracks [1].

Spotify, the world's most popular music streaming service, has avoided implementing clear user-facing labels or filters for AI music. In April, the platform launched a test feature showing how artists used AI in song credits, but it relies entirely on voluntary disclosure from artists through their record labels or digital distributors [1]. "We know this isn't a complete solution on its own. Building a truly comprehensive system is a challenge that requires industry-wide alignment," Spotify stated. The platform's position prioritizes "addressing harmful uses like spam and impersonation, rather than trying to filter music based on how it was made" [1].

This approach has left frustrated listeners taking matters into their own hands. Leipzig-based software developer Cedrik Sixtus built a Spotify AI Blocker that filters out a growing list of more than 4,700 suspected AI artists, drawing on community tracking efforts and signals like unusually high release volumes and AI-style cover art [1]. Hundreds have downloaded the tool from code-sharing websites. "It is about choice - if you want to hear AI music or if you don't," says Sixtus, who warns the software may violate Spotify's terms of service [1].

Apple Music has taken a more proactive stance on AI music detection and transparency. Schusser revealed the company has "developed technology in-house that would allow us to exactly see what music people are delivering us, what AI (model) it is and all that" [2]. In March, Apple introduced Transparency Tags, a metadata system enabling music labels to disclose whether AI has been used in song production when submitting to the platform [2]. While currently optional, Schusser emphasized he "really need(s) the content providers and the labels to take responsibility". Despite the volume of AI uploads, actual usage remains minimal - below 0.5% of total listening on Apple Music [2]. Apple has also intensified efforts against AI content fraud, doubling its fraud penalty this year and achieving a 60% reduction in fraudulent uploads [2].

Deezer has implemented the most aggressive approach among music streaming services. The platform began tagging albums containing AI-generated tracks produced by Suno, Udio and similar services, while excluding these tracks from algorithmic recommendations and human-made playlists [1]. The company uses proprietary detection technology based on training AI models to spot statistical patterns in sound itself, and recently began offering this technology for sale across the industry. "We're the only music streaming platform that has that in place," notes Jesper Wendel, Deezer's head of global communications [1].

The influx of AI-generated content raises fundamental questions about listener trust and royalties for human artists. "It is a difficult - borderline existential - balancing act for Spotify," says Robert Prey, who studies streaming platforms at Oxford University's Internet Institute [1]. Spotify risks eroding trust among listeners, artists and the wider industry if it fails to offer sufficient transparency while trying to avoid value judgments about how music is created. AI-generated music could dilute revenue pools for human artists, even though most AI tracks currently attract few listens [1]. The incentive structures merit scrutiny - AI music is cheap to produce, potentially cheaper to serve, and doesn't require royalties the way human artists do [3]. As industry standards develop and regulatory pressure builds, platforms face mounting pressure to establish user choice mechanisms and clearer disclosure frameworks that balance innovation with artist protection.

Summarized by Navi