20 Sources
[1]
Spotify to label AI music, filter spam and more in AI policy change
Spotify on Thursday announced a series of updates to its AI policy, designed to better indicate when AI is being used to make music, to cut down on spam, and to make it clearer that unauthorized voice clones are not permitted on its service. The company says it will adopt an upcoming industry standard for identifying and labeling AI music in credits, developed through the standards body DDEX, and will soon roll out a new music spam filter to catch more bad actors. Under the DDEX system, labels, distributors, and music partners submit standardized AI disclosures in music credits. This solution offers detailed information about the use of AI -- like whether it was used for AI-generated vocals, instrumentation, or post-production, for example. "We know the use of AI is going to be a spectrum, with artists and producers incorporating AI in various parts of their creative workflow," said Sam Duboff, Spotify's Global Head of Marketing and Policy, in a press briefing on Wednesday. "This industry standard will allow for more accurate, nuanced disclosures. It won't force tracks into a false binary where a song either has to be categorically AI or not AI at all," he noted. As part of the same announcement, Spotify clarified its policies around AI-enabled impersonation, stating directly that unauthorized AI voice clones, deepfakes, and any other form of vocal replicas or impersonation are not allowed and will be removed from the platform. While the DDEX standard is developing, Spotify says it's received commitments from 15 labels and distributors who plan to adopt the technology, and sees its move as one that could signal to others that it's time to follow suit. Because AI tools make it easier for anyone to release music, Spotify also has a new plan to cut down on the potential spam that results. This fall, the company will roll out a new music spam filter that will attempt to identify spam tactics, tag them, and then stop recommending those tracks to users. "We know AI has made it easier than ever for bad actors to mass upload content, create duplicates, use SEO tricks to manipulate search or recommendation systems...we've been fighting these kinds of tactics for years," Duboff said. "But AI is accelerating these issues with more sophistication, and we know that requires new types of mitigations." The company said it would roll out the filter gradually to make sure it's targeting the right signals, then add more signals over time as the market evolves. Related to this, Spotify will also work with distributors to address something called "profile mismatches," a scheme where someone fraudulently uploads music to another artist's profile across streaming services. The company said it hopes to prevent more of these before the music ever goes live. Despite the changes, Spotify executives emphasized that they still support the use of AI, provided it's used in a non-fraudulent way. "We're not here to punish artists for using AI authentically and responsibly. We hope that artists' use of AI production tools will enable them to be more creative than ever," noted Spotify VP and Global Head of Music, Charlie Hellman. "But we are here to stop the bad actors who are gaming the system, and we can only benefit from all that good side of AI if we aggressively protect against the downside," he said. Spotify's updates follow a rapid increase in AI-generated music across the industry.
This summer, an AI-generated band called Velvet Sundown went viral on its service, leading users to complain that the company isn't transparent about labeling its AI tracks. Meanwhile, streaming rival Deezer recently shared that about 18% of the music uploaded each day to its service -- or more than 20,000 tracks -- is now fully AI-generated. Spotify wouldn't share its own metrics on the matter directly -- but Duboff told reporters that "the reality is, all streaming services have almost exactly the same catalog." "People tend to deliver the music to all services," he explained, adding that uploading tracks doesn't mean anyone's listening or that the AI music makes money. "We know AI usage is increasingly not a binary, but kind of a spectrum of how artists and producers are using it."
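The DDEX-based credits described in the article above treat AI use as a set of per-component disclosures rather than a single AI / not-AI flag. As a rough illustration only, the sketch below models what such a per-track disclosure record might look like; the field names and structure are invented for this example and are not the actual DDEX schema.

```python
# Hypothetical sketch of a per-track AI-use disclosure record, illustrating
# the "spectrum, not binary" idea from the article above. Field names are
# invented for illustration; they are not the real DDEX schema.
from dataclasses import dataclass


@dataclass
class AIDisclosure:
    track_id: str
    ai_generated_vocals: bool = False
    ai_generated_instrumentation: bool = False
    ai_assisted_post_production: bool = False

    def credit_line(self) -> str:
        """Build a human-readable credit instead of a blanket 'AI' label."""
        used = [
            name
            for name, flag in [
                ("vocals", self.ai_generated_vocals),
                ("instrumentation", self.ai_generated_instrumentation),
                ("post-production", self.ai_assisted_post_production),
            ]
            if flag
        ]
        return "No AI use disclosed" if not used else "AI used in: " + ", ".join(used)


# Example: a track with AI-assisted mixing only, everything else human-made.
print(AIDisclosure("trk-001", ai_assisted_post_production=True).credit_line())
# -> AI used in: post-production
```

In this framing, a label or distributor would submit one such record per track alongside the usual credits, and the player would simply render the credit line; as the announcement describes, the disclosure itself would not affect ranking or recommendation.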
[2]
Spotify's New AI Policies Aim to Crack Down on Deepfakes and Misleading Content
Spotify is addressing the use of generative AI in music streaming with the rollout of new policies aimed at protecting artists and listeners alike. On Thursday, the company acknowledged AI's pros and cons while announcing plans for a spam filter, disclosure notices and a way to tackle impersonation. "At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it," Spotify said on its website. "At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push 'slop' into the ecosystem, and interfere with authentic artists working to build their careers." To help tackle what amounts to digital identity theft, Spotify is implementing an impersonation policy that allows artists to file a claim against anyone who has cloned their voices and uploaded content without permission. Under the policy, the streamer will remove the uploaded content, whether it was created with AI or by any other method. If an artist discovers a song or body of work on the platform using their voice or likeness, they -- or someone on their behalf -- can submit a complaint. This fall, Spotify is also launching a music spam filter to fight against sketchy music uploads involving things like bulk uploads, copies, and abuse of the royalty system. The company said the filter will be "a system that will identify uploaders and tracks engaging in these tactics, tag them, and stop recommending them." To avoid punishing legitimate uploads, Spotify will roll it out cautiously and monitor it along the way. And what's set to be a major guideline is its new AI disclosures. Spotify is working with DDEX and multiple media companies to show standardized AI information disclosures in music credits on the app. As a listener, you will see song info that tells you whether AI tools were used for "AI-generated vocals, instrumentation, or post-production." Though Spotify has embraced AI through features like its AI DJ and playlist creation, the move comes after several incidents of artists being impersonated on music apps. This past July, the streaming service removed a fake AI-generated song that had been uploaded to the profile of Blaze Foley, a country artist who died in 1989.
[3]
Spotify cracks down on AI 'slop' - these are the changes you'll see
ZDNET's key takeaways: Spotify is introducing new policies around AI music; the new policy pushes back against impersonation and spam; and the streaming service will now clearly label AI-generated music. One of the most popular music streaming platforms is taking steps to help protect both artists and users against the misuse of AI. In an announcement today, Spotify said it was making several policy changes related to AI-generated content on its platform. These changes are designed not only to help people know when songs are made by AI (or when AI was used in the process at all) but also to fight against the misuse of AI in music. Spotify notes the fight against spam isn't necessarily new, adding that it's been fighting junk tracks for over a decade. AI, though, has escalated things significantly. In the past year alone, Spotify says it has removed more than 75 million "spammy" tracks from its service. Here's a look at the changes you'll see on Spotify: A fight against impersonation: First, Spotify says it will remove music that impersonates another artist's voice without that artist's permission -- whether the content is made with AI or not. Spotify says this can mean an uploader that is pretending to be the original artist or someone presenting themselves as an "AI version" of an existing artist. This includes content that doesn't name an artist in the metadata or credits but contains vocals "clearly recognizable as the exact voice of another artist" without that artist's permission. A new spam filtering system: Since Spotify offers payouts to artists based on how often users play a song, scammers are trying to take advantage. The company explained that spam tactics like "mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop" are easier to produce than ever with AI. Not only does this dilute the royalty pool for real artists, but it also reduces attention for those artists. Spotify says that's why it's introducing a new music spam filter system that automatically identifies uploaders and tracks engaging in these tactics, tags those tracks, and stops recommending them. The music streamer says this system is rolling out "conservatively over the coming months" to avoid penalizing the wrong uploaders. A labeling system for AI-generated content: Spotify acknowledges that many listeners want to know about the songs they're listening to, but there's no solid way to know for sure if AI was used in the creation process. That's why it's helping develop an industry standard for AI disclosures in music credits through DDEX. Artists will be able to clearly indicate how AI played a role in the creation of a track, including vocals, instrumentation, and post-production. Spotify explains that this "is not about punishing artists who use AI responsibly or down-ranking tracks for disclosing information about how they were made," but about strengthening trust.
[4]
Spotify is doing more to address AI 'slop' on its platform
Spotify announced a set of policy changes surrounding AI-generated music and spam on its streaming platform. The company is helping to develop an industry standard for AI disclosure in music credits, alongside DDEX and other industry partners. It will be strengthening its approach to AI-assisted spam, such as unauthorized vocal clones, as well as uploads that fraudulently deliver music to another artist's profile. The new disclosures will encourage artists to share what aspect, if any, of their production was created with the assistance of AI. Instead of a song simply being marked as "is AI" or "no AI," artists can specify whether they used AI-generated vocals, instrumentation or post-production. The streamer will also debut a new impersonation policy, making it clearer how the platform deals with AI voice clones. The policy promises to give artists stronger protections against this sort of spam, and clearer recourse should any appear. "...the pace of recent advances in generative AI technology has felt quick and at times unsettling, especially for creatives. At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push 'slop' into the ecosystem, and interfere with authentic artists working to build their careers," Spotify said in its announcement. These aren't the only tactics that bad actors use to divert royalties and deceive listeners. Spotify shared that other types of spam "such as mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop" have become easier to create and deploy as AI tools substantially lower the barrier of entry to creating this type of content. To address these, the streamer is launching a new spam filter this fall that will identify uploads and tracks that engage in these types of spam, tag them on the platform, and stop recommending them to users. Spotify said that over the past 12 months it has already removed more than 75 million "spammy" tracks. Spotify says that this sort of spam can dilute the royalty pool and take attention away from real artists trying to earn a living, even in part, on the platform. The company says its goal is to achieve more transparency for listeners and protect artist identity through these new policies. These new policies don't address AI-generated projects like The Velvet Sundown, which remains on the platform despite all its lyrics, vocals, and imagery being entirely AI-generated. Spotify doesn't directly acknowledge the AI band, but says "we support artists' freedom to use AI creatively while actively combating its misuse by content farms and bad actors."
[5]
Spotify's new filter aims to keep AI slop out of your playlists
Today, Spotify announced it is rolling out new policies and tools to fight off spammy AI content. This initiative is focusing on three areas of concern: AI slop, impersonation, and disclosure of AI use. According to the company, the goal is to provide "listeners with greater transparency about the music they hear" and to protect artists from spam, impersonation, and deception. In terms of dealing with AI slop, Spotify will be rolling out a new music spam filter. The filter will identify and tag uploaders who try to abuse the system -- like using AI to mass upload songs, using SEO hacks, or creating artificially short tracks -- and they'll stop being recommended. The rollout of this filter is scheduled for this fall, but Spotify adds that the tool will be released "conservatively over the coming months and continue to add new signals to the system as new schemes emerge."
[6]
Spotify nukes 75+ million spammy tracks, adds filter to prevent more
Spotify just dropped a bombshell: it's purged more than 75 million spammy tracks from its platform over the past year. That's a massive cleanup operation, and a pretty clear sign that the streaming giant is stepping up its war against AI-generated junk clogging up playlists. In a blog post, Spotify framed the move as part of its broader effort to protect authentic artists from the wave of generative AI spam and impersonation plaguing the music industry. While the company has been battling fake uploads for years, the recent boom in generative AI music tools -- not to be confused with Spotify's own useful AI tools -- has supercharged the problem, letting bad actors pump out low-effort tracks at scale and even impersonate popular artists to siphon off royalty payouts. To counter that, Spotify is rolling out a spam filter this fall that will automatically flag mass uploads, duplicate tracks, SEO-stuffed song titles, and other royalty-gaming schemes. The system will start conservatively (presumably to avoid nuking legitimate indie artists), but will evolve as scammers come up with new tricks. Spotify also announced tougher impersonation rules, explicitly banning unauthorized AI voice clones of artists and promising faster resolution for cases where tracks are incorrectly mapped to an artist's profile. The goal is to keep artists in control of their voices being used in AI-generated music. Finally, the company is working with industry partners on a new AI disclosure standard that will let artists voluntarily credit how they used AI in a track's creation, without penalizing them in search or recommendations. This isn't just about cleaning house, it's about trust. With payouts hitting $10 billion annually, Spotify can't afford to let spammers erode listener experience or eat into royalty pools. The platform seems intent on getting ahead of the AI wave before it swamps the ecosystem. And if 75 million tracks are already gone, it's clear Spotify is willing to swing a very big hammer.
[7]
The new battle against 'AI Slop' -- Spotify will now label synthetic music and punish abusers
Spotify, one of the leading music streaming services, is taking aim at the rise of AI in the streaming industry. Announcing a host of new rules and regulations around artificial intelligence in a new blog post, Spotify highlighted the 'unsettling' pace at which AI had advanced. Of course, as a company that has used AI rather liberally, introducing the AI DJ, and utilising AI algorithms in much of its service, Spotify was equally quick to point towards the need for balance. "At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push 'slop' into the ecosystem, and interfere with authentic artists working to build their careers," the blog post explained. "That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors. The future of the music industry is being written, and we believe that aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers." While Spotify claims to have already removed over 75 million 'spammy' tracks from the service in the last year, its main aim is to limit the increase of AI slop further. This will occur through three new AI rules. The biggest and most important update in this new set of rules is Spotify's adoption of DDEX. This is a system for identifying and labelling AI music in credits. With this system, labels, distributors and music partners will submit explanations of how AI was used in music. This means it can be made clear if AI was used to generate vocals or instrumentation, or used in post-production or any other area of the music. This information will be displayed in the app when looking at music. Spotify has partnered with a range of industry partners on the introduction of this feature. One of Spotify's new rules coming into place is its impersonation policy. This clarifies how the company handles claims around AI voice clones. This, in theory, would give artists stronger protections and a clearer method for recourse. Spotify goes on to claim it is ramping up its investment to protect against uploaders fraudulently delivering music to another artist's profile across streaming platforms. Spotify highlighted the rise in people using Spotify to make money through unethical means. The plan is to combat this via a music spam filter, which will identify and track uploaders misusing the service. This includes mass uploads, duplicates, SEO hacks, artificially short track abuse and other methods of boosting their profile. Spotify states in the blog this feature will be rolled out slowly and conservatively, attempting to avoid bans on the wrong accounts. While Spotify is putting in limits on the use of AI, it is by no means turning its back on AI. The company has been an active user of the technology for a long time now and has supported the use of AI by artists. "While AI is changing how some music is made, our priorities are constant. We're investing in tools to protect artist identity, enhance the platform, and provide listeners with more transparency," the blog states. "We support artists' freedom to use AI creatively while actively combating its misuse by content farms and bad actors. 
Spotify does not create or own music; this is a platform for licensed music where royalties are paid based on listener engagement, and all music is treated equally, regardless of the tools used to make it."
[8]
Spotify is cracking down on AI voice clones with new impersonation rules for music uploads
The platform now requires artist consent for any AI-impersonated vocals. Spotify is tightening the mic cord on deceptive musical impersonators and manipulative sound spam with a set of new policies that take direct aim at the now-endemic plague of AI-generated audio submitted under false pretenses. Now, if you want to upload a song that uses an AI-generated version of a real artist's voice, you had better have their permission. No more deepfake Drake tracks, cloned Ariana choruses, or other "unauthorized vocal replicas" allowed to sneak into playlists, including those from artists who passed away decades ago. Spotify's fight against music claiming false artistic origins is one of a few fronts in a larger battle against so-called "AI slop." Alongside the anti-impersonation push, Spotify is introducing a new AI-aware spam filtering system along with a way for artists to disclose when and how AI was used in the creation of their music for legitimate purposes. While Spotify has long maintained a policy against "deceptive content," convincing AI voice clones have forced a redefinition. Under the new rules, using someone's voice without their explicit authorization is a violation. That makes removing offending content easier while laying out clearer boundaries for those experimenting with AI in a non-malicious way. The same goes for tracks that, AI-generated or not, get fraudulently uploaded to an artist's official profile without their knowledge. The company is now testing new safeguards with distributors to prevent these hijackings and is improving its "content mismatch" system so artists can report issues even before a song goes live. As AI music tools become ubiquitous, their creative potential has unfortunately included opportunities for scams and lies, along with a flood of low-effort tracks designed solely to exploit the Spotify algorithm and collect royalties. According to Spotify, more than 75 million spammy tracks were removed from its platform in the past 12 months alone. The new filter could help remove all of those thousands of slightly remixed trap beats uploaded by bots, or 31-second ambient noise loops uploaded in bulk. The system will begin tagging bad actors and down-ranking or delisting those tracks. Spotify says it will roll this out cautiously to avoid punishing innocent creators. Not that Spotify is totally against AI being used to produce music. But the company made it clear it wants to make the use of AI transparent and specific. Instead of simply stamping tracks with an AI label, Spotify will begin integrating more nuanced credit information based on a new industry-wide metadata standard. Artists will be able to indicate if vocals were AI-generated, but instrumentation was not, or vice versa. Eventually, the data will be displayed inside the Spotify app, so listeners can understand how much AI was involved in what they're hearing. That kind of transparency may prove essential as AI becomes more common in the creative process. The reality is, many artists are using AI behind the scenes, whether for vocal enhancement, sample generation, or quick idea sketching. But until now, there's been no real way to tell. For listeners, these changes could mean more confidence that what you're hearing is coming from where you thought. With AI musicians becoming more popular and scoring big record deals, these sorts of policy moves will be necessary across any streaming service. Still, enforcement will be the real test. Policies are only as effective as the systems behind them. 
If impersonation claims take weeks to resolve, or if the spam filter catches more hobbyists than hustlers, creators will quickly lose faith. Spotify is large enough to potentially set a good standard for dealing with AI music cons, but it will need to be adaptable to how the scam artists respond in this AI battle of the bands.
[9]
Spotify removes 75m spam tracks in past year as AI increases ability to make fake music
Streamer to crack down on AI-generated spam by introducing filter to identify fraudulent uploads Spotify has revealed it removed 75m spam tracks from its platform over the past year as artificial intelligence tools increase the ability of fraudsters to create fake music. The world's biggest music streaming service announced a crackdown on vexatious tracks after admitting the rise of powerful AI tools had coincided with a significant amount of spam content being tackled by the streamer. AI-generated spam is becoming a problem for streaming platforms and musical artists because every play more than 30 seconds long generates a royalty for the scammer behind it - and denies payment to a legitimate artist. The 75m spam tracks rival the scale of Spotify's actual catalogue, which stands at 100m tracks. Spotify also offers nearly 7m podcasts and 350,000 audiobooks. The company said the spam tracks were identified either before they were uploaded as part of an existing filtering process for new tracks, or taken down after getting on to the platform and being identified as illicit. Spotify said it would start rolling out a music spam filter to identify uploaders, tag them and stop the tracks from being recommended by its algorithm. The company said AI tools had made it easier to generate spam content such as impersonations, ultra-short tracks and mass uploads of artificial music, which range from meditation instrumentals to duplicates of famous artists. "Spam tactics ... have become easier to exploit as AI tools make it easier for anyone to generate large volumes of music," Spotify said. The Stockholm-based company, which has nearly 700 million users, said despite the uptick in harmful uses of AI-made content, it was not having a serious impact on listening habits or payments to artists. Spotify paid $10bn (£7.4bn) in royalties last year, although the level of royalty payments is often a subject of tension between the platform and artists. "Engagement with AI-generated music on our platform is minimal and isn't impacting streams or revenue distribution for human artists in any meaningful way," Spotify said. In 2023 Spotify introduced a rule that individual tracks have to be streamed more than 1,000 times before generating a payment, a change the company says has helped tackle scammers. Spotify is also strengthening rules on vocal deepfakes, which are allowed only when the artist being impersonated has given their permission. It is also cracking down on scammers uploading deepfake tracks to a popular artist's profile page. One of the most notorious musical deepfakes was published in 2023. Heart on My Sleeve, a song featuring AI-made vocals purporting to be Drake and the Weeknd, was pulled from streaming services after Universal Music Group, the record company for both artists, criticised the song for "infringing content created with generative AI". Spotify said it would support a new industry standard for disclosing the use of AI in creating a track, developed by a tech and music-industry backed non-profit called DDEX. Artists' use of the standard on the platform will be voluntary, said Spotify, and they will not be forced to label music as entirely or partly AI created. "This change is about strengthening trust across the platform," said the company. "It's not about punishing artists who use AI responsibly or down ranking tracks for disclosing information about how they were made." 
The popularity on Spotify of Velvet Sundown, an AI-generated band, had given rise to calls for mandatory tagging of music created by the technology. Spotify has not taken down Velvet Sundown - which says it is a "synthetic music project" on its Spotify profile page - because it does not violate the company's anti-spam policies.
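The royalty mechanics the Guardian piece describes (plays longer than 30 seconds generate a payment; since 2023, a track needs more than 1,000 streams before it earns anything) explain why spammers favor bot-driven, barely-31-second tracks. The toy sketch below simply restates that arithmetic; the threshold values come from the article, while the function itself is a simplification and not Spotify's actual accounting logic.

```python
# Toy restatement of the royalty thresholds cited in the article above.
# The constants mirror the reported rules; the logic is a simplification,
# not Spotify's real payout system.

MIN_PLAY_SECONDS = 31          # a play must run past 30 seconds to count
MIN_MONETIZED_STREAMS = 1000   # 2023 rule: a track needs 1,000+ streams to be paid


def eligible_streams(play_durations_seconds: list[int]) -> int:
    """Count plays long enough to generate a royalty."""
    return sum(1 for d in play_durations_seconds if d >= MIN_PLAY_SECONDS)


def track_is_payable(play_durations_seconds: list[int]) -> bool:
    return eligible_streams(play_durations_seconds) > MIN_MONETIZED_STREAMS


# A bot-farmed 31-second track clears both hurdles cheaply, which is the
# abuse pattern the new spam filter is meant to catch.
print(track_is_payable([31] * 1500))  # True
```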
[10]
Spotify to introduce AI label and spam filter to stop AI music slop
When AI slop started making the rounds on Spotify -- bands like The Velvet Sundown for instance -- users urged Spotify to do something about it. They wanted a label showing that the music on their Discover Weekly and recommendations was actually created by AI. Some users even went so far as to say they should "boycott Spotify" until a label was made. On Thursday, Spotify said it would start doing just that, saying in a press release that "aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers." The platform is integrating a new spam filtering system, AI disclosures, and "improved enforcement of impersonation violations" like deepfakes. Spotify worked with DDEX, or the Digital Data Exchange, which is a standards-setting organization in the music industry, to require a "new industry standard for AI disclosures in music credits." This is because, as Spotify says, many artists responsibly use AI tools while creating music, so adding a simple "AI" or "Not AI" label doesn't actually solve the issue of listeners wanting to know if they're listening to AI music. "This standard gives artists and rights holders a way to clearly indicate where and how AI played a role in the creation of a track -- whether that's AI-generated vocals, instrumentation, or post-production," Spotify wrote in its press release. "This change is about strengthening trust across the platform. It's not about punishing artists who use AI responsibly or down-ranking tracks for disclosing information about how they were made." "At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push 'slop' into the ecosystem, and interfere with authentic artists working to build their careers," Spotify's press release read. "That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors." The new impersonation policy Spotify released specifically details how it plans to give artists stronger protections against AI voice clones. Spotify plans to attack spam music -- like "mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop" -- by rolling out a new system that "will identify uploaders and tracks engaging in these tactics, tag them, and stop recommending them." They're going to start conservatively so they don't accidentally punish the wrong people, and then add more signals as the system ramps up. "These updates are the latest in a series of changes we're making to support a more trustworthy music ecosystem for artists, for rightsholders, and for listeners. We'll keep them coming as the tech evolves, so stay tuned," Spotify wrote.
[11]
Spotify's Attempt to Fight AI Slop Falls on Its Face
It's been clear for a while that a deluge of AI slop is drowning out real music and human artists on Spotify. The platform has become overrun by bots and AI-spun trickery, which have actively been scamming revenue from real bands. Earlier this year, a self-proclaimed "indie rock band" called The Velvet Sundown racked up millions of streams on the streaming service using AI-generated songs. Weeks later, the company was caught populating the profiles of long-dead artists with new AI-generated songs that have nothing to do with them. Now Spotify has finally acknowledged the problem, announcing new policies to protect artists against "spam, impersonation, and deception." "At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push 'slop' into the ecosystem, and interfere with authentic artists working to build their careers," the company wrote. "That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors." Spotify head of marketing and policy Sam Duboff told reporters at a press briefing that 15 record labels and music distributors had committed to the changes already, The Verge reported. The company is also planning to roll out a new spam filter that can detect common tactics used by spammers to game Spotify's royalties system. "Left unchecked, these behaviors can dilute the royalty pool and impact attention for artists playing by the rules," the company wrote in its press release. But just one day later, a new AI scandal on Spotify showed the magnitude of the undertaking. The issue arose when an acclaimed and long-dormant side project by Bon Iver frontman Justin Vernon, called Volcano Choir, unexpectedly uploaded a new single called "Silkymoon Light" after being on hiatus for more than a decade. The problem was that the track clearly wasn't a real Volcano Choir song -- and bore obvious hallmarks of AI generation, like robotic vocals and a glitchy album cover. In other words, Spotify may be on board to clean up its platform, but the technical hurdles are clearly immense. The use of AI in the music industry has become a major point of contention, especially when it comes to impersonation. We've seen countless tracks featuring the cloned vocals of famous music artists go viral online, a trend that has already resulted in prolonged legal battles. According to Spotify's impersonation policy, the company says it will "remove music" if it's found to be replicating "another artist's voice without that artist's permission," even when it's labeled as being an "'AI version' of the artist." "Some artists may choose to license their voices to AI projects -- and that's their choice to make," the company's press release reads. "Our job is to do what we can to ensure that the choice stays in their hands." The company will be working with the Digital Data Exchange (DDEX), a not-for-profit dedicated to the creation of digital music value chain standards, to establish common "AI disclosures in music credits." "As this information is submitted through labels, distributors, and music partners, we'll begin displaying it across the app," the statement reads. It remains to be seen whether Spotify's new policies will stem a tidal wave of AI slop proliferating on its platform, let alone whether they'll be meaningfully enforced in the future. And it's not clear how Spotify will ferret out artists that don't cop to their use of AI. 
Initially, The Velvet Sundown denied it was using the tech, but eventually updated its bio on Spotify, referring to itself as an "ongoing artistic provocation" that made considerable use of AI. Even after all that drama, The Velvet Sundown's music can still be streamed on Spotify. Some of its lazily generated songs have amassed over three million listens to date, generating royalties that could've easily gone to human musicians instead. And, though a Spotify spokesperson acknowledged the dodgy Volcano Choir song on a call, it currently remains live on the band's page.
[12]
Spotify moves to tackle AI abuse with transparency measures
Spotify on Thursday unveiled several measures to encourage artists and publishers to be more transparent about their use of artificial intelligence, as well as to limit certain abuses. The Swedish platform is recommending that musicians and producers comply with a new standard developed by the Digital Data Exchange (DDEX), a consortium of leading media companies, music licensing organizations, digital service providers and technology firms that develops standards for the creative industries. Since the beginning of the year, DDEX has allowed tracks to be labeled as entirely, partially, or not at all created with AI in their descriptions. Once these metadata are integrated, they'll be available "across Spotify," promised Sam Duboff, head of music marketing at the streaming platform. The issue gained prominence in June when an AI group called The Velvet Sundown suddenly went viral, with their most popular song surpassing three million streams on Spotify. The new labeling system operates on a voluntary basis, and Spotify does not require content uploaders to disclose AI's role in their production. "Initially, I think people's mindset was very much binary," explained Charlie Hellman, head of music at Spotify, during a presentation. "There's either AI music or there's not. But the reality is that we're now seeing this proliferation of so many different ways that AI is incorporated into all different steps of the tool chain." Spotify does not want to "punish artists for using AI authentically and responsibly," Hellman said. According to the company, more than 15 labels and distributors have already committed to comply with the DDEX nomenclature. Deezer is currently the only major audio platform to systematically flag tracks entirely generated by artificial intelligence. Regarding such tracks identified by Spotify as entirely created through generative AI, "their audience is minimal," Duboff said. "It's really a small percentage of streams. In general, when the music doesn't take much effort to create, it tends to be low quality and doesn't find an audience." The platform also announced Thursday that it had updated its rules to make clear that unauthorized AI use, including the creation of deepfakes or imitations without consent, is not permitted and such content would be removed.
[13]
Spotify Is Finally Trying to Combat AI Slop
In a post on its blog this morning, Spotify announced it is doing something to combat the glut of AI-generated music on its streaming platform. According to the company, bad actors and content farms that "push 'slop' into the ecosystem" are going to be dealt with. Spotify says it has already removed over 75 million such tracks in the last year, and bigger changes are coming. Over the next few months, Spotify says it will crack down on musical impersonators, implement a new spam filtering system, and work with others in the music business to develop an industry standard for AI disclosures in music credits. According to Spotify, the availability of AI tools has allowed the easy creation of musical deepfakes -- AI impersonations of existing artists, in other words. The company says it will remove any track that "impersonates another artist's voice without their permission -- whether that's using AI voice cloning or any other method." The ban includes both tracks for which the person uploading is explicitly presenting themselves as another artist and tracks labeled as an "AI version" of another artist -- unless the track was made with the original artist's permission, of course. Spotify is also targeting mass uploads, duplicates, SEO hacks, artificially short track abuse, and other spammy abuses of its platform. The company's new spam filter will be rolled out this fall and will identify uploaders and tracks engaging in these tactics, then "tag them and stop recommending them." The end goal, according to Spotify, is to prevent bad actors from generating royalties that could be otherwise paid out to professional artists and songwriters. Spotify has also pledged to help develop an overarching industry standard for disclosure of how artificial intelligence is used in the production of music. Labeling AI in music credits is a much more complex issue than Spotify's other new initiatives: all kinds of technology are used in music production, and there's a huge continuum between a track that's generated entirely from a prompt and using auto-tune on a slightly off-pitch vocal. Spotify says the effort requires "broad industry alignment," so it's working with companies like Labelcamp, NueMeta, Revelator, and SonoSuite through music industry standardization company DDEX to develop an industry standard for AI labeling. Spotify's new initiatives don't ban AI music, or require it to be labeled. The company says it wants to treat all music "equally, regardless of the tools used to make it," which seems to leave space for Spotify to continue promoting obviously AI-generated music playlists like "Jazz for Study" and "Lo Fi Chill" that consist mainly of "artists" like The Midtown Players, ourchase, and The Tate Jackson Trio that have all the signs of being AI creations, but are officially "verified" by Spotify. To be fair to the music streaming service, I did a similar search for AI playlists and musicians a few months ago, and it's marginally more difficult to find now than it was then, but until Spotify stops filling its own playlists with AI-generated glurge, its pledge to fight "AI slop" rings hollow.
[14]
Spotify launches music spam filter to kill off AI slop flooding playlists
TL;DR: Spotify is implementing new filters to aggressively protect artists from the negative impacts of generative AI, ensuring their creative rights and content integrity remain secure amid evolving AI technologies. This move highlights Spotify's commitment to safeguarding artists in the digital music landscape. The rise of artificial intelligence-powered tools is leading to the creation of AI-generated art, which is making its way onto platforms that human artists currently dominate. The Velvet Sundown: One of the most notable examples of AI-generated art making waves on a platform is The Velvet Sundown, a completely AI-generated band that exploded in popularity with the release of their song "Dust On the Wind". The song attracted millions of streams on Spotify, and the band now has nearly 300,000 monthly listeners. With AI tools being so sophisticated now, Dust On the Wind sounds like a song that was made by real people, which has sparked widespread concern about how listeners are meant to tell if the music they are listening to was made by real people or generated by an AI in a matter of seconds. In response to these concerns, Spotify is implementing a new filter that forces music publishers to label tracks that are generated with artificial intelligence. Additionally, Spotify is adding a new spam filtering system, AI disclosures, and "improved enforcement of impersonation violations" such as deepfakes. Notably, Spotify worked with the Digital Data Exchange, a standards-setting organization in the music industry, to create the new "industry standard for AI disclosures in music credits," as simply adding an "AI-generated" or "Not AI" label wouldn't solve the problem, and would probably make it worse, as many artists responsibly use AI as a tool in the creation of their songs, but not entirely. "This standard gives artists and rights holders a way to clearly indicate where and how AI played a role in the creation of a track -- whether that's AI-generated vocals, instrumentation, or post-production. This change is about strengthening trust across the platform. It's not about punishing artists who use AI responsibly or down-ranking tracks for disclosing information about how they were made," Spotify wrote in its press release.
[15]
Spotify embraces creative AI while cracking down on fraud
Spotify Technology SA is embracing artificial intelligence tools for creating content, but cracking down on fraudulent impersonation and spamming. The Swedish streaming giant is introducing new policies and increasing investments to protect artists and listeners against AI generated "slop" that floods the platform, degrading the listening experience and diverting artist royalty payments to bad actors. Spotify said it has strengthened protections for artists and is giving clearer recourse for claims of unauthorized vocal impersonation, or deepfakes that use AI to create a replica of an artist's voice. The company is also rolling out a new spam-identification system that will proactively flag and stop recommending tracks that appear to game the royalty payment system. Spotify has battled for years against bogus tracks that siphon off billions of dollars to rightsholders. But the arrival of generative AI startups like Suno and Udio has increased the stakes with software that can mimic an artist's voice, create new tracks that sound real and turn up in playlists. In the past 12 months alone, Spotify removed more than 75 million "spammy tracks" from the platform, the company said. Last year, the US Justice Department charged a North Carolina man with music streaming fraud after he allegedly created hundreds of thousands of songs with the help of AI and used automated bots to drive streams to those songs, reaping $10 million from his efforts. "AI has accelerated our urgency and changed some of the tactics we need to use," said Charlie Hellman, Spotify's global head of music vertical. "But our principles and our priorities around this area really haven't changed. Protecting artists and their identities, keeping the platform experience clean and high quality and helping listeners have a better transparent experience into the music that they love, so they know what they're getting." AI is a highly sensitive topic in the music industry and figuring out how to cope with the technology is a rare point of agreement among artists and record companies. Major labels have sued Suno and Udio over copyright infringement, yet the market continues to be flooded with AI-generated music. This summer, Spotify's algorithm began promoting the music of a new group known as Velvet Sundown, a mysterious artist that appeared to be AI-generated and garnered a significant number of streams. Spotify was criticized for promoting the group and for not labeling it as AI-generated. The group's Spotify biography has been updated to say it's created in part by AI. Despite growing concerns, Spotify won't ban outright or discourage AI-generated music, except for tracks that don't comply with its spam or fraud policies. The company is, however, encouraging its partners to disclose how and when they implement AI, but the policy is opt-in, not required, leaving little incentive for labels and artists to share when they've used the technology. Spotify plans to support a new industry standard, developed through DDEX, a digital standards organization, that will introduce a disclosure for AI content across music platforms. Some partners have already committed to this standard, including distributors CD Baby and DistroKid, as well as independent labels Believe, Empire, and Downtown Artist & Label Services. A Spotify spokesperson said the platform is in talks with the major labels who are "broadly supportive." 
Hellman said artists use AI in various capacities that aren't nefarious -- ChatGPT might write the lyrics while a human plays the instrument or vice versa -- and the line of what counts as an AI-generated song is still blurry. Spotify won't penalize people who use it as a legitimate tool. Sam Duboff, global head of marketing & policy for Spotify for Artists, said the company hasn't seen songs that are wholly AI-generated take off. "In general, when the music doesn't take much effort to create, it tends to be of low quality, it doesn't tend to find an audience," he said.
[16]
Spotify Embraces AI Music With New Policies, While Combating 'Spam' and 'Slop'
Under new rules Spotify announced today, an AI "band" like the now-infamous Velvet Sundown will still be allowed on the service, but would be encouraged to label itself properly from the start. Overall, Spotify has zero intention of eliminating AI-generated music from its service, execs said Tuesday, in a press conference announcing its new guidelines. At the same time, the company said it's waging war against a flood of low-quality AI content, removing more than 75 million "spammy" tracks in the past 12 months alone. There's no question that in the wake of the rise of services such as Suno -- which allow near-instant generation of new songs -- AI-generated music is deluging streaming services. Spotify competitor Deezer has said that approximately 28 percent of daily uploads are fully AI-generated music, though those tracks account for a mere 0.5 percent of actual streams. But even as Spotify aims to curb the impact of that onslaught, they're signaling that AI music is here to stay. "We're not here to punish artists for using AI authentically and responsibly," said Charlie Hellman, Spotify's VP global head of music product. "We hope that artists' use of AI production tools will enable them to be more creative than ever." Spotify is most concerned about "mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop," according to a blog post from the company. Accordingly, it's rolling out a new spam filter to flag uploaders engaging in those practices, and thus "help prevent spammers from generating royalties that could be otherwise distributed to professional artists and songwriters." The company will stop short of removing those tracks, though, instead simply making them ineligible for recommendation by the streamer's algorithm. The company will still have no rules in place against boosting AI-generated songs in general. The platform will encourage, but apparently not mandate, that artists label their AI usage via a new industry standard developed through DDEX -- a long-standing non-profit that creates technical standards for song metadata across platforms. The idea is that artists will specify their precise uses of generative AI, ranging from fully prompt-generated songs to human-made songs with AI-tweaked lyrics. The approach treats AI use as "a spectrum, not a binary," said Sam Duboff, Spotify's global head of marketing and policy, music business. The new policies also include more explicit bans on unauthorized AI voice clones and deepfakes. "Some artists may choose to license their voice to AI projects -- and that's their choice to make," Duboff said. "Our job is to do what we can to ensure that the choice stays in their hands." At the same time, the company says it's aiming to be more vigilant about "profile mismatches," where fraudsters upload content under the name of real artists, often famous ones. This summer, the Velvet Sundown, a fake AI-generated band that initially didn't acknowledge its nature, amassed over a million monthly listeners, while appearing to benefit from algorithmic promotion on Spotify. The band eventually admitted in an updated bio that it was "a synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence." Duboff suggested the "band" might have gained much less notoriety if it was properly labeled from the start. "I think the kind of news cycle, the fan interest, would've been really different," he said.
[17]
Spotify moves to tackle AI abuse with transparency measures - The Economic Times
Spotify on Thursday unveiled several measures to encourage artists and publishers to be more transparent about their use of artificial intelligence, as well as to limit certain abuses. The Swedish platform is recommending that musicians and producers comply with a new standard developed by the Digital Data Exchange (DDEX), a consortium of leading media companies, music licensing organisations, digital service providers and technology firms that develops standards for the creative industries. Since the beginning of the year, DDEX has allowed tracks to be labeled as entirely, partially, or not at all created with AI in their descriptions. Once these metadata are integrated, they'll be available "across Spotify," promised Sam Duboff, head of music marketing at the streaming platform. The issue gained prominence in June when an AI group called The Velvet Sundown suddenly went viral, with their most popular song surpassing three million streams on Spotify. The new labeling system operates on a voluntary basis, and Spotify does not require content uploaders to disclose AI's role in their production. "Initially, I think people's mindset was very much binary," explained Charlie Hellman, head of music at Spotify, during a presentation. "There's either AI music or there's not. But the reality is that we're now seeing this proliferation of so many different ways that AI is incorporated into all different steps of the tool chain." Spotify does not want to "punish artists for using AI authentically and responsibly," Hellman said. According to the company, more than 15 labels and distributors have already committed to comply with the DDEX nomenclature. Deezer is currently the only major audio platform to systematically flag tracks entirely generated by artificial intelligence. Regarding such tracks identified by Spotify as entirely created through generative AI, "their audience is minimal," Duboff said. "It's really a small percentage of streams. In general, when the music doesn't take much effort to create, it tends to be low quality and doesn't find an audience." The platform also announced Thursday that it had updated its rules to make clear that unauthorized AI use, including the creation of deepfakes or imitations without consent, is not permitted and such content would be removed.
[18]
Spotify Is Fighting Back Against AI Music Slop with Stricter Policies
Moreover, Spotify is working on a new industry-wide standard to ensure transparency around how AI is used in the music creation process. Spotify has been seeing a surge in AI-generated song uploads, and is now working with DDEX to create a new metadata standard to bring more transparency to AI use in music creation. The company is also tightening its policies around AI music content on the platform. A couple of weeks after the announcement of Spotify lossless audio, the music streaming giant is now taking steps to limit the growing spread of AI music and impersonations. Spotify announced in its blog post on Thursday that it's taking three key actions to curb the spread of AI slop on its platform. It starts with stricter impersonation policies, which will provide artists with "stronger protections and clearer recourse." Spotify is also improving its content-mismatch process to prevent attempts where uploaders try to place AI-generated music under an artist's profile. The streaming service will work with artist distributors to report content mismatch even before release. Spotify will also introduce a new music spam filter. It will track users who upload music in bulk, likely generated using AI, and stop recommending their content. The aim is to recommend artists who "play by the rules," and provide them with a fair payout. Lastly, Spotify is working with DDEX, an association for setting standards, to create a new metadata framework to identify in which part of the music-making process AI was used. The goal is not to prevent the use of AI, but to be transparent about it with listeners. Given the rapid spread of AI in today's world, it is impossible to avoid its use. So, having a transparent standard will help maintain user trust across streaming services. 15 labels and music distributors have already committed to adopting this new AI disclosure standard, according to Spotify. The streaming service is not outright banning the use of AI, since it has its own AI DJ feature, but it's trying to protect the authenticity of artists in this increasingly AI-driven world.
[19]
Spotify Rolls Out New Filters, Disclosure Rules for AI Content | PYMNTS.com
Spotify's goal is to protect artists and maintain listener trust while still allowing creative uses of AI, according to a Thursday (Sept. 25) press release. Spotify removed more than 75 million spam tracks in the past year, many of them ultra-short or duplicate files uploaded to game royalty rules, the release said. The company will now introduce a "music spam filter" that tags suspicious uploads and suppresses them in recommendation systems instead of deleting them outright. The filter will use signals such as mass uploads, duplicate or near-duplicate audio, SEO-heavy titles, and tracks with little coherence. Because generative tools are improving quickly, Spotify will take a cautious rollout approach and refine the filter as abuse patterns change, according to the release. To bring more clarity, Spotify is adopting the Digital Data Exchange (DDEX) metadata standard, an industry framework for sharing metadata consistently across labels, distributors and streaming platforms. Tracks must now disclose if AI was used in vocals, instrumentation or post-production. These disclosures will appear through Spotify's existing metadata channels, but they will not automatically reduce a track's visibility, the release said. The impersonation rules are also stricter. Vocal cloning or voice impersonation without consent is banned. Spotify will also target content mismatches, where uploads are falsely attributed to other artists. The company is working with distributors to block such uploads before release and has improved reporting tools so that rights holders can act more quickly, according to the release. Spotify's policy shift mirrors broader industry concerns about the role of AI in music. The company has drawn a line between AI as a creative tool and AI as a source of fraud, noting that the company "won't ban outright or discourage AI-generated music" but is instead focusing on spam and impersonation, Bloomberg reported Thursday. Meanwhile, Billboard reported Thursday that Spotify is addressing spam and impersonation as the "worst offenders." The changes arrive as regulators pay closer attention to how platforms use AI and how their algorithms influence competition. In other sectors, lawmakers have raised questions about whether major platforms act as gatekeepers and competitors. Similar scrutiny could extend to music if filtering and metadata rules affect which artists are promoted or sidelined. On the commercial side, Spotify is also signaling that trust and identity matter beyond music content. Sandra Alzetta, Spotify's vice president of commerce and customer service, told PYMNTS in an interview this month that the company views how users pay as nearly as important as what they play. That focus suggests that user credibility and platform reliability are becoming linked across commerce and content. For now, Spotify's policies show how the music industry is beginning to set guardrails for AI use. The company is positioning itself as willing to host AI-assisted creativity but unwilling to allow AI to undermine the integrity of its platform.
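The PYMNTS piece above lists the kinds of signals the new filter reportedly weighs (mass uploads, near-duplicate audio, SEO-heavy titles, artificially short tracks) and notes that flagged tracks are suppressed from recommendations rather than deleted. Purely as a hedged sketch, the snippet below shows one simple way such signals could be combined into a score; the thresholds, weights, and field names are assumptions for illustration, not Spotify's actual system.

```python
# Minimal, hypothetical sketch of combining the spam signals named in the
# article into a score, and of "tag, don't delete": flagged tracks stay up
# but are excluded from recommendations. All thresholds and weights are
# illustrative assumptions, not Spotify's real filter.
from dataclasses import dataclass


@dataclass
class Upload:
    uploader_daily_tracks: int    # tracks this uploader pushed in the last day
    duplicate_similarity: float   # 0..1 similarity to existing catalog audio
    title_keyword_count: int      # stuffed search keywords in the title
    duration_seconds: int


def spam_score(u: Upload) -> float:
    score = 0.0
    if u.uploader_daily_tracks > 100:
        score += 0.4   # mass-upload behavior
    if u.duplicate_similarity > 0.95:
        score += 0.3   # near-duplicate audio
    if u.title_keyword_count > 5:
        score += 0.2   # SEO-heavy title
    if u.duration_seconds < 35:
        score += 0.1   # artificially short track hovering near the royalty cutoff
    return score


def eligible_for_recommendation(u: Upload, threshold: float = 0.5) -> bool:
    # The track is tagged, not removed; it simply stops being recommended.
    return spam_score(u) < threshold


print(eligible_for_recommendation(Upload(250, 0.99, 8, 31)))  # False: flagged
```

A real system would presumably learn weights from labeled abuse cases and add signals over time, which matches the articles' description of a conservative rollout that expands as new schemes emerge.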
[20]
Spotify Removes 75 Million "Spammy" Songs, Cracks Down On AI Use by "Bad Actors"
Spotify is set to strengthen AI protections for artists and music producers with a series of measures, including improved enforcement of impersonation violations, a new spam filtering system and AI disclosures for music with industry-standard credits. The music streaming giant made the announcement in a "For the Record" post on its website on Thursday, noting that it had removed 75 million "spammy" tracks.

"The pace of recent advances in generative AI technology has felt quick and at times unsettling, especially for creatives," the post begins. "At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push 'slop' into the ecosystem, and interfere with authentic artists working to build their careers. That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors." Spotify adds, "The future of the music industry is being written, and we believe that aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers. We envision a future where artists and producers are in control of how or if they incorporate AI into their creative processes. As always, we leave those creative decisions to artists themselves, while continuing our work to protect them against spam, impersonation, and deception, and providing listeners with greater transparency about the music they hear."

On impersonation, Spotify has committed to stronger rules and better enforcement. "We've introduced a new impersonation policy that clarifies how we handle claims about AI voice clones (and other forms of unauthorized vocal impersonation), giving artists stronger protections and clearer recourse," the company says. "Vocal impersonation is only allowed in music on Spotify when the impersonated artist has authorized the usage." Additionally, Spotify said it was ramping up "investments to protect against another impersonation tactic -- where uploaders fraudulently deliver music (AI-generated or otherwise) to another artist's profile across streaming services." The company said it was also "testing new prevention tactics with leading artist distributors to equip them to better stop these attacks at the source."

Spotify hopes its new spam filtering measures will cut down on issues such as "mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop" that have all become easier and more prevalent due to AI tools. The new spam filter "will identify uploaders and tracks engaging in these tactics, tag them, and stop recommending them." The company says it will roll out the new music spam filter over the coming months and will be careful not to penalize the wrong uploaders.

The third measure Spotify has introduced is AI disclosures for music with industry-standard credits. With AI increasingly being used in the music industry, the company wants to increase transparency around its use. "We know the use of AI tools is increasingly a spectrum, not a binary, where artists and producers may choose to use AI to help with some parts of their productions and not others," says Spotify, adding that the industry needs "a nuanced approach" to AI transparency rather than one that forces every song to be classified as either "is AI" or "not AI."
Spotify says it will help develop and support a new industry standard for AI disclosures in music credits that is being developed through DDEX, the Digital Data Exchange, an international standards-setting organization. This AI disclosure information will be displayed across the Spotify app. Spotify's new AI crackdown comes despite the company embracing AI in other aspects of its business. In February, Spotify said it will accept more AI-narrated audiobooks on its platform through a partnership with ElevenLabs. Still, the new AI measures will be welcomed by the major labels and fans after a number of recent news reports of undeclared AI artists racking up thousands of streams on Spotify. In July, The Guardian reported that the band The Velvet Sundown released two albums and accrued over one million streams on Spotify before it was revealed that the band and its music were AI-generated. "We welcome Spotify's new AI protections as important steps forward consistent with our longstanding Artist Centric principles," a Universal Music Group spokesperson told The Hollywood Reporter. "We believe AI presents enormous opportunities for both artists and fans, which is why platforms, distributors and aggregators must adopt measures to protect the health of the music ecosystem in order for these opportunities to flourish. These measures include content filtering; checks for infringement across streaming and social platforms; penalty systems for repeat infringers; chain-of-custody certification and name-and-likeness verification. The adoption of these measures would enable artists to reach more fans, have more economic and creative opportunities, and dramatically diminish the sea of noise and irrelevant content that threatens to drown out artists' voices."
Spotify announces significant updates to its AI policy, introducing measures to label AI-generated music, filter spam, and prevent unauthorized voice clones. The move aims to balance the creative potential of AI with the need to protect artists and listeners.
Spotify, one of the world's leading music streaming platforms, has announced a series of significant updates to its AI policy. These changes are designed to address the growing influence of artificial intelligence in music creation and distribution, while also protecting the interests of artists and listeners [1].

A key component of Spotify's new policy is the adoption of an industry standard for identifying and labeling AI-generated music. The company is collaborating with DDEX (Digital Data Exchange) to implement a standardized system for AI disclosures in music credits [1]. This system will provide detailed information about the use of AI in various aspects of music production, including vocals, instrumentation, and post-production. Sam Duboff, Spotify's Global Head of Marketing and Policy, emphasized that this approach allows for more nuanced disclosures, recognizing that AI use in music creation exists on a spectrum rather than a binary classification [1].

To address the increasing challenge of AI-generated spam content, Spotify is introducing a new music spam filter. This system, set to roll out in the fall, will identify and tag tracks and uploaders engaging in spam tactics, such as mass uploads, duplicates, and SEO manipulation [1][4]. The company revealed that it has already removed over 75 million "spammy" tracks in the past year alone, highlighting the scale of the problem [4]. Spotify plans to implement this filter gradually to ensure it targets the right signals without penalizing legitimate content [1].

Spotify is also taking a strong stance against unauthorized AI voice clones and impersonation. The new policy explicitly prohibits deepfakes and any form of vocal replicas or impersonation without the artist's permission [2]. This measure aims to protect artists from digital identity theft and maintain the integrity of their work on the platform.

While implementing these protective measures, Spotify executives emphasized their continued support for responsible AI use in music creation. Charlie Hellman, Spotify's VP and Global Head of Music, stated, "We're not here to punish artists for using AI authentically and responsibly. We hope that artists' use of AI production tools will enable them to be more creative than ever" [1].

Spotify's policy changes come at a time when AI-generated music is rapidly increasing across the industry. The company's approach to addressing these challenges could set a precedent for other streaming services and shape the future of AI in music production and distribution [1][2]. As the music industry continues to grapple with the implications of AI technology, Spotify's new policies represent a significant step towards creating a more transparent and fair ecosystem for both artists and listeners in the age of artificial intelligence.
Summarized by Navi