Current commercial software can separate only vocals, bass, and drums at acceptable quality. AudioStrip is developing a solution that can simultaneously separate a wider range of musical elements, including individual instruments, while maintaining usable quality. As seen with the 2023 release of The Beatles’ “Now and Then” featuring John Lennon’s extracted vocals, this technology has far-reaching implications, enabling new remixes, educational uses, and the recovery of lost recordings.

Everything you need to know:
✓ AudioStrip awarded UK government grant to advance AI music source separation
✓ Partnership with Queen Mary University to develop cutting-edge algorithms
✓ Goal to extract individual instruments and vocals while maintaining high quality
AudioStrip wins UK gov’t grant for AI music innovation
AudioStrip, a pioneering company specializing in AI-driven music source separation, has been awarded a significant grant from the UK government’s £1 million “AI in the Music Industry” Innovate UK Fund. The project, titled “Fine-grained music source separation with deep learning models,” aims to push the boundaries of what’s possible in extracting individual instruments and vocals from music files.
The competition assessors recognized the potential of AudioStrip’s innovation, stating that it is a “well planned, resourced and researched innovation that can impact the business, market and wider industry in the field of AI and music separation” and that “the rewards could be significant.” This funding will enable AudioStrip to advance the development of AI products and services that can benefit the entire music supply chain.
As part of the project, AudioStrip will collaborate with the world-renowned Centre for Digital Music (C4DM) at Queen Mary University of London to develop state-of-the-art AI algorithms for music source separation. The goal is to create a tool that can automatically detect and extract individual elements from a song, including vocals, drums, bass, piano, electric and acoustic guitar, and synthesizer, while maintaining the highest quality.
“This technology is sweeping the music industry. AudioStrip will offer more advanced tools for precise separation of individual elements in audio files. By partnering with Queen Mary, we aim to elevate music source separation technology beyond industry benchmarks, making it an indispensable tool for DJs, independent artists, producers, and licensors.”
Basil Woods, Co-Founder and CEO of AudioStrip
AudioStrip & Queen Mary Uni to push AI music separation limits
Current commercial software can separate only vocals, bass, and drums at an acceptable quality; no product on the market can simultaneously separate more instruments while maintaining usable quality. AudioStrip’s partnership with Queen Mary University aims to fill this gap and push the limits of what AI can do for music production and manipulation.
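For context, most separation systems of this kind work in a time–frequency representation: the mixture is transformed with a short-time Fourier transform, a mask is estimated for each source, and the masked spectrograms are inverted back to audio. Below is a minimal, purely illustrative sketch in Python, where a crude hand-picked frequency cutoff stands in for the neural network’s predicted mask; the toy sine-wave “bass” and “vocal” signals and the 300 Hz threshold are assumptions for demonstration, not AudioStrip’s actual method:

```python
import numpy as np
from scipy.signal import stft, istft

# Toy one-second mixture: a low "bass" tone plus a high "vocal" tone.
sr = 8000
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 100 * t)    # 100 Hz stand-in for bass
vocal = np.sin(2 * np.pi * 1000 * t)  # 1 kHz stand-in for a vocal
mix = bass + vocal

# Transform the mixture to the time-frequency domain.
f, _, Z = stft(mix, fs=sr, nperseg=512)

# A real system predicts a mask per source with a deep network;
# here a crude 300 Hz cutoff stands in for that prediction.
mask = (f < 300)[:, None]

# Apply each mask and invert back to audio.
_, bass_est = istft(Z * mask, fs=sr, nperseg=512)
_, vocal_est = istft(Z * ~mask, fs=sr, nperseg=512)
```

Because these two tones occupy disjoint frequency bands, even this crude mask recovers them cleanly; real instruments overlap heavily in both time and frequency, which is why fine-grained separation of many simultaneous sources remains an open problem.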
The implications of this technology are far-reaching. It could allow for the creation of new remixes, the isolation of specific instruments for educational purposes, or even the recovery of lost recordings, as seen with the 2023 release of The Beatles’ “Now and Then” featuring John Lennon’s extracted vocals.
Simon Dixon, Director of the UKRI Centre for Doctoral Training in Artificial Intelligence and Music at Queen Mary University of London, expressed enthusiasm for the partnership, highlighting the multidisciplinary expertise of the university’s Centre for Digital Music and its track record of successful collaborations with businesses of all sizes.
As AudioStrip and Queen Mary University embark on this groundbreaking project, the music industry eagerly awaits the tools that will emerge, promising to reshape the way we create, manipulate, and experience music in the digital age.