Music is no stranger to technological innovation. From recording devices to synthesizers to digital music players, new technologies have continuously shaped how we experience and create sound.
However, a new technological force is emerging that has the potential to revolutionize music in unprecedented ways – artificial intelligence.
What is AI?
Artificial intelligence, or AI for short, refers to computer systems that can perform tasks that normally require human intelligence, like visual perception, decision-making, and language translation. In recent years, major leaps have been made in AI capabilities thanks to advances in machine learning – the development of systems that can learn from data without being explicitly programmed. This has allowed AI to enter new domains like music where it is augmenting human creativity rather than replacing it.
Use cases for AI in music
While AI may seem foreign or intimidating to many, its applications in music are quite approachable when broken down. Think of it this way – AI systems are like having an endless supply of talented assistants and collaborators at your disposal. Some AI programs act as a “composer’s apprentice,” helping craft drum loops and harmonies or fill in melodic gaps based on your input. Others analyze vast music libraries and generate new sounds and styles in a similar vein. In either case, AI isn’t replacing people but empowering them with new creative tools.
This complementary relationship between humans and AI is seen in many projects blurring the line between man and machine.
Example 1: Hologram Cellist Yves Dhar
Take cellist Yves Dhar’s collaboration with the AGNES AI system as part of Adam Schoenberg’s “Automation” piece. On stage, Dhar and a hologram of himself engaged in a back-and-forth guided by a score the AI pre-generated in Schoenberg’s style. Rather than replace Dhar, the AI part pushed his abilities – demonstrating how intelligent systems can provoke new artistic heights. Other artists, like Grammy winner Dana Leong, are using AI as a studio tool, viewing it as a collaborator that can perform music or even design visual and performance components.
Example 2: AI can help generate sound effects for commercials
You may have even heard AI-assisted tracks without realizing it. “Potato chip crunching” sound effects made for commercials by Miami’s Animal Music Studios utilized Google’s AI to weave chip noises into instrumental loops, saving hours of tedious manual work. AI isn’t just for experimental performances – it has practical applications enhancing everyday media too. Major tech firms are also releasing accessible AI music programs. For instance, Google’s experimental app lets you sing lyrics which are recreated by AI in your own voice, empowering non-musicians to easily express themselves musically.
Example 3: AI to continue a melody based on what comes before
As AI progresses, the boundaries between human and machine composers may further blur. Stanford researcher John Thickstun created an AI model called the Anticipatory Music Transformer that can not only continue a melody based on what comes before but also synchronize with what follows. This offers a glimpse into AI writing full pieces independently in the future.
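Thickstun’s model is a large neural network, but the basic idea of “continue a melody based on what comes before” can be sketched with something far simpler. The toy Python snippet below uses a first-order Markov chain over MIDI note numbers – the function names and the example phrase are illustrative only, not part of the Stanford project:

```python
import random

def build_model(melody):
    """Count which note tends to follow which (a first-order Markov chain)."""
    model = {}
    for current, following in zip(melody, melody[1:]):
        model.setdefault(current, []).append(following)
    return model

def continue_melody(melody, extra_notes, seed=0):
    """Extend a melody by sampling each next note from the notes observed
    after the current one; if a note was never followed, just repeat it."""
    rng = random.Random(seed)
    model = build_model(melody)
    out = list(melody)
    for _ in range(extra_notes):
        out.append(rng.choice(model.get(out[-1], [out[-1]])))
    return out

# MIDI note numbers for a short C-major phrase: C4 D4 E4 C4 E4 G4
phrase = [60, 62, 64, 60, 64, 67]
print(continue_melody(phrase, 4, seed=1))
```

A real system like the Anticipatory Music Transformer goes much further, conditioning on notes that come *after* the gap as well as before, which is what lets it fill in a middle passage rather than only append to the end.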
However, most experts agree that AI alone won’t satisfy listeners.
Example 4: San Francisco Symphony uses AI
Major orchestras like the San Francisco Symphony are exploring these concepts via performances blending AI harmonies with live instrumentalists. Even entire dance productions integrate AI, like San Francisco Ballet premiering new works inspired by AI themes alongside music partially generated through technology.
Listeners don’t want purely AI-generated music
Google researcher Chris Donahue, who worked on music generation projects, notes “I’ve seen no indication that people want to listen to these things generated in a vacuum without any input from people.” As such, human oversight will likely remain an integral part of the creative process for a long time to come.
Some see positives in AI pushing artistic boundaries – cellist Yves Dhar, for one, suggests it could make genres “crunchier.” But others worry a flood of auto-generated music may saturate the industry. For now, most applications keep the human firmly at the heart of artistic output, using AI as an expressive tool rather than a replacement.
In all, AI brings both promise and questions to the table. But one thing is clear – when responsibly developed and applied, it has strong potential to augment, not replace, human creative expression.
Through collaborative projects, AI enables new modes of artistic performance mixing man and machine. And as a tool, it streamlines media production while letting non-experts participate easily. Most experts emphasize that AI shines not on its own but when it pushes human ingenuity to new heights.
So while this technology presents uncertainties, its collaborative spirit suggests an optimistic future if steered with care, focus and humanity. The story of AI and music has only just begun.