Technologies called Harmony Cloak and Poisonify let musicians add noise to their music that humans can't hear but that ruins the files for AI training. These tools can potentially degrade entire AI models that train on protected music, giving musicians new technical leverage against AI companies that use their work without permission.
Musician and tech entrepreneur Benn Jordan recently shared a fascinating YouTube video about how artists can protect their music from being used to train AI systems. The video explains how musicians can “poison-pill” their audio files to prevent AI companies from using their work without consent or payment.
AI music generation is growing at a fast pace
Jordan explains that AI music generation has grown rapidly since around 2015, with platforms like Suno ranking among the top generative AI apps in the latest Andreessen Horowitz report. Companies such as Suno, which recently faced a lawsuit for allegedly using hits like ‘Mambo No. 5’ without paying royalties, and Miniax Audio have raised millions in funding while often training their systems on copyrighted music without permission.
When asked what data they used for training, these companies typically avoid answering because it could expose them to legal trouble. Many have scraped music from platforms like Spotify and YouTube regardless of copyright status.
How Adversarial Attacks on Audio (and Images) Work
The protection method uses “adversarial noise”—a technology that exploits differences between how AI and humans process sound. AI systems analyze spectrograms (visual representations of sound) when interpreting audio.
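As a rough, hypothetical illustration, this is how such a spectrogram might be computed with PyTorch and torchaudio; the sine wave merely stands in for a real recording, and the transform parameters are arbitrary choices:

```python
# Rough illustration: converting audio into the mel spectrogram an AI model
# actually "looks at". The 440 Hz sine wave is a placeholder for a real
# recording, and the transform parameters are arbitrary.
import torch
import torchaudio

sample_rate = 16000
t = torch.arange(0, 2.0, 1 / sample_rate)            # 2 seconds of samples
waveform = 0.5 * torch.sin(2 * torch.pi * 440 * t)   # pure tone standing in for music

to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=1024, hop_length=256, n_mels=80
)
spectrogram = to_mel(waveform.unsqueeze(0))           # shape: (1, 80, time_frames)
print(spectrogram.shape)                              # this "picture" is what the model analyzes
```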
The same approach works for images: adversarial noise can fool an image classifier while the picture looks unchanged to a human viewer.
By adding carefully designed noise patterns that humans can’t hear, musicians can make their files confusing or useless to AI while sounding normal to listeners. For example, the noise might make a guitar sound like a trumpet to AI while humans still hear a guitar.
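To make the idea concrete, here is a minimal, hypothetical sketch of that kind of attack: a single FGSM-style gradient step that nudges an audio clip toward a wrong instrument label. The toy classifier, the label set, and the noise budget are assumptions for illustration only; tools like Harmony Cloak and Poisonify use far more sophisticated, perceptually constrained optimization.

```python
# Hypothetical sketch of adversarial noise against an instrument classifier:
# one FGSM-style step that nudges a clip toward the "trumpet" label.
# The classifier is a tiny untrained stand-in and epsilon is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

INSTRUMENTS = ["guitar", "piano", "trumpet", "flute"]      # assumed label set

classifier = nn.Sequential(                                # stand-in for a trained model
    nn.Conv1d(1, 16, kernel_size=64, stride=16), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, len(INSTRUMENTS)),
)

clip = torch.randn(1, 1, 16000, requires_grad=True)       # placeholder "guitar" recording
target = torch.tensor([INSTRUMENTS.index("trumpet")])     # label we want the AI to hear

loss = F.cross_entropy(classifier(clip), target)
loss.backward()

epsilon = 1e-3                                             # tiny, ideally inaudible budget
adversarial_clip = (clip - epsilon * clip.grad.sign()).detach()

# The waveform change is bounded by epsilon, but the classifier's prediction
# is pushed toward "trumpet"; iterating this step strengthens the effect.
print(INSTRUMENTS[classifier(adversarial_clip).argmax(dim=1).item()])
```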
Harmony Cloak: technology that prevents AI copying
Jordan describes Harmony Cloak, a new tool developed by researchers at the University of Tennessee, Knoxville, and designed specifically to shield musicians’ work from AI copying. It adds targeted noise that disrupts an AI’s ability to detect melody and rhythm.
When AI models train on protected files, they fail to learn musical patterns effectively. Jordan notes that training metrics flatline when these files are used, showing the AI isn’t improving its understanding.
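The video does not spell out the underlying math, but the flatlining behavior Jordan describes matches the broader “error-minimizing noise” idea from adversarial machine learning: optimize an inaudible perturbation so the protected clip already yields near-minimal training loss, leaving the model little gradient signal to learn from. The sketch below illustrates that general idea with a placeholder model and loss; it is not the published Harmony Cloak algorithm.

```python
# Simplified sketch of the "error-minimizing noise" idea only.
# The surrogate model, loss, and budget are placeholders, not Harmony Cloak.
import torch
import torch.nn as nn

surrogate = nn.Sequential(                       # toy stand-in for a generative audio model
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=9, padding=4),
)
loss_fn = nn.MSELoss()

clip = torch.randn(1, 1, 16000)                  # placeholder one-second recording
target = clip.clone()                            # toy reconstruction objective

epsilon = 0.002                                  # keep the noise well below audibility
delta = torch.zeros_like(clip, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-3)

for _ in range(200):                             # drive the training loss toward zero via the noise
    optimizer.zero_grad()
    loss = loss_fn(surrogate(clip + delta), target)
    loss.backward()
    optimizer.step()
    with torch.no_grad():                        # project back into the inaudible budget
        delta.clamp_(-epsilon, epsilon)

protected_clip = (clip + delta).detach()         # sounds the same, but offers the model little to learn
```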
Targeting instrument classification systems with Poisonify
Poisonify works similarly but targets instrument classification systems. This matters because AI music generators need to identify instruments accurately to create convincing music.
Jordan explains that files protected with Poisonify cause AI to misidentify instruments—for example, hearing a piano as a flute. The more protected files an AI trains on, the worse it gets at identifying instruments in all music.
Testing Protected Audio Files
Jordan tested these protection methods on several AI music platforms, including Suno, which has recently been integrated into Amazon’s Alexa+ despite ongoing legal and ethical controversies:
- He uploaded original unprotected songs to services like Suno and Meta’s MusicGen.
- Then uploaded versions protected with Harmony Cloak and Poisonify.
The results were dramatic: Suno produced poor-quality extensions of protected tracks (Jordan described the output as “music from an airport spa that somebody downloaded off Napster in 1999”), Miniax Audio created what he called “nightmare fuel,” and Meta’s MusicGen simply crashed.
The Practical Challenges of Using Adversarial Attacks
Jordan acknowledges several practical issues with these methods:
- Processing requirements: The protection is computationally intensive; processing an 18-second file with Poisonify takes about two hours on two high-end GPUs.
- Electricity costs: Processing a full album could cost $40-$150 in electricity depending on location (see the rough estimate after this list).
- Impact on recommendation algorithms: Since music platforms use instrument recognition for recommendations, protected music might appear in unrelated playlists.
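For context on where a figure like that could come from, here is a back-of-envelope estimate; the per-GPU power draw and electricity prices are assumptions, not numbers from the video.

```python
# Back-of-envelope check on the electricity figure, under assumed numbers
# (~500 W per GPU and the two prices per kWh are assumptions, not Jordan's):
album_seconds = 40 * 60                      # a roughly 40-minute album
clips = album_seconds / 18                   # ~133 eighteen-second segments
gpu_hours = clips * 2 * 2                    # ~2 hours per clip on 2 GPUs
kwh = gpu_hours * 0.5                        # ~500 W draw per high-end GPU
for price in (0.15, 0.50):                   # cheap vs. expensive electricity, $/kWh
    print(f"~${kwh * price:.0f} at ${price:.2f}/kWh")
# prints roughly $40 and $133, which lands inside the quoted $40-$150 range
```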
Making Protection More Accessible
Jordan discusses efforts to make these tools more widely available:
- Developing more efficient algorithms that require less computing power.
- Creating services that could integrate with music distribution platforms.
- Working with distributors to offer AI-proofing as an optional service.
Conclusion
Jordan’s video shows that musicians now have technical options to protect their work from unauthorized AI use. While these methods still face practical challenges, they represent an important step toward giving artists more control over their intellectual property.