2 Sources
[1]
This invisible technique poisons songs so AI can't clone them
My Music My Choice alters songs so they sound normal to fans but become unusable nonsense when fed into generative AI systems.

Last year, AI clones of Bad Bunny and Drake flooded streaming platforms, and listeners couldn't tell the real tracks from synthetic soundalikes. The music industry has been scrambling for answers ever since. Researchers at Binghamton University and the startup Cauth AI think they've found one. It's called My Music My Choice, or MMMC, and it works differently from most copyright tools: instead of catching fakes after they appear, this method lets artists poison their recordings before release. The audio reaches human ears just fine, but voice cloning models hear nothing but garbage.

Here's how the poisoning actually works

The system targets a song's waveform. My Music My Choice adds microscopic alterations so subtle that you'll never notice them. Play the track on Spotify and it sounds exactly like the master recording. But feed that file into cloning software and everything breaks. The shifts confuse the algorithm, making the protected vocals read as a completely different performance. When the tool tries to replicate the voice, it only produces distorted static. The goal is to minimize the impact on human listeners while maximizing disruption for the machines. Artists could apply this protection during production and release with confidence that cloning software won't work on their tracks.

Why last year's wave made this urgent

Bad Bunny drops a new track, and within hours the internet fills with studio-quality versions sung by anyone. Generative AI made that scenario real in 2025, and fans couldn't tell what was authentic anymore. Beyond the copyright chaos, artists watched their identities get borrowed without permission. "People are using voice cloning for fun but also for nefarious purposes," Ciftci said, describing how someone's voice can be grabbed and made to sing things they never would. The emotional toll and lost revenue piled up fast.
Musicians needed a way to shut cloning down before it starts, and MMMC finally gives them that.

What's next for artists and the tool

The team tested MMMC on 150 tracks across multiple genres and plans to scale up to larger samples. They also want to compare it with similar methods, though they admit there aren't many out there yet. For musicians watching this space, the message is clear: protection is coming before the clone, not after.
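The core idea above, alterations small enough to be inaudible yet present in the waveform, can be illustrated as a budget-bounded additive perturbation. This is only a toy numpy sketch of that budget idea (the real MMMC perturbation is learned adversarially against cloning models, and the `protect` and `snr_db` helpers here are hypothetical, not part of the tool):

```python
# Illustrative sketch only: shows an amplitude-bounded additive
# perturbation, NOT the learned perturbation MMMC actually uses.
import numpy as np

def protect(waveform: np.ndarray, perturbation: np.ndarray,
            epsilon: float = 1e-3) -> np.ndarray:
    """Add a perturbation clipped to +/-epsilon per sample, so the
    audible signal barely changes (assumes samples in [-1, 1])."""
    delta = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(waveform + delta, -1.0, 1.0)

def snr_db(clean: np.ndarray, protected: np.ndarray) -> float:
    """Signal-to-noise ratio of the added perturbation, in dB."""
    noise = protected - clean
    return 10.0 * np.log10(np.sum(clean**2) / np.sum(noise**2))

# Toy 1-second "vocal" at 16 kHz: a 220 Hz sine wave.
sr = 16_000
t = np.arange(sr) / sr
song = 0.5 * np.sin(2 * np.pi * 220 * t)

rng = np.random.default_rng(0)
protected = protect(song, rng.normal(scale=1e-3, size=song.shape))

print(snr_db(song, protected))  # well above 40 dB: practically inaudible
```

The point of the sketch is the budget: every sample of the protected track stays within a fraction of a percent of the original, which is why the song still sounds like the master recording on playback.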
[2]
Deepfake Songs Are Exploding. This Tool Shuts Them Down. | Newswise
Newswise -- Artificial intelligence models can now clone a voice with just a few seconds of audio, fueling a surge of deepfake songs online and creating a growing crisis for musicians who don't want their voices hijacked. Beyond the obvious intellectual property rights issue, this can lead to lost revenue and take an emotional toll on artists who put their heart and soul into their songs. But researchers have a solution. In collaboration with the startup company Cauth AI, faculty and students at Binghamton University, State University of New York have developed My Music My Choice (MMMC), a digital safeguard that empowers artists by protecting their songs from generative AI cloning.

Consider this scenario: Bad Bunny has just released a new song, but suddenly the internet is flooded with countless studio-quality versions sung by famous and infamous people around the world, thanks to generative AI. With everyone able to produce their own high-quality version of the song, even the most diehard fans of Bad Bunny would be hard-pressed to tell the real track from a synthetic imitation.

Umur Aybars Ciftci, a research assistant professor in the First-Year Research Immersion Program at Binghamton University, and his collaborator, Ilke Demir, CEO and founder of Cauth AI, want to stop that from happening to today's artists. "Even though this AI technology has been developed for fun and entertainment, a lot of people are using it for nefarious purposes," said Ciftci. "You can easily take someone's voice and make them sing something that they normally don't sing, or steal someone's songs and make it look like it is your song to begin with."

My Music My Choice works by adding small, imperceptible changes to a song's waveform. When you play the song back, the vocal will sound exactly the same to your ears. But when an AI model tries to replicate the song, it will only produce distorted noise.
From the AI model's perspective, the slight shifts made by My Music My Choice make the protected audio sound like a completely different vocal track, and the AI model struggles to replicate it.

"Collaborating with disruptive startups like Cauth AI provides us with a unique vantage point into the front-line challenges of the industry, essentially bridging the gap between lab-scale concepts and industrial-scale impact. Our goal is to build a model that figures out exactly which tiny modifications to introduce so that people hear no difference at all, while AI voice-cloning systems are thrown off," said Ciftci. "In other words, we're trying to minimize the impact on human listeners while maximizing disruption for the machines."

If you're a musician with a new track, Ciftci said, this is something you could apply to a song before releasing it to protect it from AI voice cloning. The researchers tested the tool on 150 music tracks across multiple genres, and they will continue testing the system on larger data samples. They also want to compare My Music My Choice with similar methods, though Ciftci said there aren't many out there.

Binghamton students Gerald Pena Vargas, Alicia Unterreiner and David Ponce contributed to this research. The paper, "My Music My Choice: Adversarial Protection Against Vocal Cloning in Songs," was presented at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025) Workshop: AI for Music.
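Ciftci's stated objective, minimal change for listeners and maximal disruption for machines, is a trade-off between an audibility budget and a shift in what an AI model "hears." As a loose illustration (not the MMMC model), the sketch below uses a toy linear stand-in for a voice encoder: for a fixed perturbation energy, the direction that moves the encoder's output the most is the top right-singular vector of its weight matrix, so even a tiny perturbation can be disproportionately disruptive to the machine. All names here (`embedding`, `W`, the dimensions) are hypothetical:

```python
# Toy illustration of "small for humans, large for machines":
# against a LINEAR surrogate encoder, the best perturbation direction
# per unit of audible energy is the top right-singular vector of W.
import numpy as np

rng = np.random.default_rng(0)
D, K = 256, 16                      # samples per frame, embedding size
W = rng.normal(scale=1.0 / np.sqrt(D), size=(K, D))  # stand-in encoder

def embedding(x: np.ndarray) -> np.ndarray:
    """Toy 'voice encoder': what a cloning model extracts from audio."""
    return W @ x

x = rng.normal(size=D)              # stand-in for one vocal frame
epsilon = 1e-2                      # audibility budget (L2 norm of delta)

# SVD gives the direction maximizing |W @ delta| for fixed |delta|.
_, s, vt = np.linalg.svd(W, full_matrices=False)
delta = epsilon * vt[0]

audible = np.sum(delta**2)                                # what humans get
shift = np.sum((embedding(x + delta) - embedding(x))**2)  # what the AI gets

print(shift / audible)  # equals s[0]**2, the top singular value squared
```

The design point the sketch makes: the ratio of machine-side disruption to human-side change depends on the direction of the perturbation, not just its size, which is why the researchers describe building a model to find exactly which tiny modifications to introduce.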
Researchers at Binghamton University and Cauth AI developed My Music My Choice, a digital safeguard that adds imperceptible changes to a song's waveform. The invisible audio technique lets artists protect their recordings before release—tracks sound normal to fans but become unusable when fed into AI voice cloning systems. Tested on 150 tracks, the tool addresses the surge of deepfake songs and copyright infringement that flooded platforms in 2025.
A wave of AI-generated music clones swept streaming platforms in 2025, with synthetic versions of tracks by Bad Bunny and Drake becoming indistinguishable from authentic recordings [1]. The surge exposed musicians to copyright infringement and identity theft while leaving fans unable to separate real performances from deepfake songs. Researchers at Binghamton University and startup Cauth AI responded by developing My Music My Choice (MMMC), a digital safeguard that empowers artists to protect their work before it reaches the public [2].

The system works through audio poisoning: adding imperceptible changes to a song's waveform that human listeners cannot detect but that completely disrupt AI voice cloning models [1]. When played on streaming services, protected tracks sound identical to master recordings. But when the same files are fed into cloning software, the microscopic alterations confuse the algorithm, causing it to interpret the protected vocals as an entirely different vocal track [2]. Instead of replicating the artist's voice, the cloning system produces only distorted static.

Umur Aybars Ciftci, research assistant professor at Binghamton University, and Ilke Demir, CEO of Cauth AI, designed My Music My Choice to minimize impact on human listeners while maximizing disruption for machines [2]. The adversarial protection targets the specific ways AI models process audio. "Our goal is to build a model that figures out exactly which tiny modifications to introduce so that people hear no difference at all, while AI voice-cloning systems are thrown off," Ciftci explained [2].

Artists can apply this protection during production and release recordings knowing that AI replication of their vocal tracks is blocked from the start [1]. The approach differs fundamentally from existing copyright tools that attempt to catch fakes after they appear online. By poisoning recordings before distribution, musicians gain control over how generative AI systems interact with their work.

The research team tested the tool on 150 music tracks spanning multiple genres, demonstrating its versatility across different musical styles [2]. They plan to scale up testing with larger data samples and to compare My Music My Choice with similar methods, though Ciftci acknowledged that few alternatives currently exist in this space [2].

The urgency stems from real-world consequences musicians faced throughout 2025. "People are using voice cloning for fun but also for nefarious purposes," Ciftci said, describing how bad actors grab someone's voice and make them sing things they never would [1]. Beyond intellectual property rights violations, artists experienced emotional tolls and lost revenue as their identities were borrowed without permission [2].

The research, titled "My Music My Choice: Adversarial Protection Against Vocal Cloning in Songs," was presented at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025) Workshop: AI for Music [2]. Binghamton students Gerald Pena Vargas, Alicia Unterreiner and David Ponce contributed to the work [2].

Collaboration between academic researchers and Cauth AI provides insight into front-line industry challenges, bridging the gap between laboratory concepts and industrial-scale impact [2]. As the team continues wider testing and refinement, the message for musicians remains clear: protection against AI cloning now arrives before the clone appears, not after copyright chaos has already unfolded [1]. Artists should monitor developments as this digital safeguard moves toward broader availability, potentially reshaping how the music industry defends against unauthorized AI-generated music clones.

Summarized by Navi