Key highlights
R.G. Collingwood’s 1938 art vs. craft distinction classifies AI music generation as craft: a predetermined process with a known output, not discovered expression
MIT Media Lab research (N=152) found identical music is rated less emotionally moving when listeners know it was made by AI
Live music revenue hit $23.6B in 2024, projected to reach $40.6B by 2032, while AI-generated tracks account for 28% of Deezer uploads but only 0.5% of streams
AI artists like China Styles (22M streams) and Olivia B. Moore succeed because audiences connect with the human story behind the music. The art isn’t in the audio file. It’s in the artist’s presence around it
A philosopher described AI music 87 years before it existed
Attribution: R.G. Collingwood, 1936, by Monozigote, CC BY-SA 4.0
R.G. Collingwood published The Principles of Art in 1938. He had no idea he was describing Suno.
His central distinction: craft has a predetermined end and a known plan for achieving it. Art does not. The creator “need not be acting in order to achieve any ulterior end; he need not be following a preconceived plan; and he is certainly not transforming anything that can properly be called a raw material.”
Type a prompt into an AI music generator (“lo-fi hip hop, minor key, sad, 90 BPM”) and the system executes a probability distribution toward a predetermined stylistic output. By Collingwood’s framework, that is craft. Here are 5 reasons the science agrees.
5 reasons science says AI-generated music is craft, not art
1. Collingwood’s art vs. craft test disqualifies prompted output
Collingwood built his definition around 6 characteristics of craft, including the distinction between planning and execution, the separation of means and ends, and the transformation of raw material into finished product. AI music generation fits every one.
Art, in his framework, is the opposite. The artist is “conscious of a perturbation or excitement which he feels going on within him, but of whose nature he is ignorant.” Expression is how the artist discovers what they feel. No predetermined end. No separation between planning and execution. The process is the product.
Attribution: Walter Benjamin, c. 1928. Source: Akademie der Künste, Berlin
Walter Benjamin made a parallel argument in 1935: mechanical reproduction destroys what he called the “aura” of a work, the unique presence tied to its specific time, place, and creation. A 2025 paper in AI & Society introduced the concept of “semi-aura” for AI art. Live improvisation retains maximum aura. AI-generated output has none.
Attribution: John Dewey. Photo: Underwood & Underwood, Library of Congress
John Dewey reinforced this in Art as Experience (1934): art requires “doing and undergoing” as a unified whole. Resistance, struggle, discovery. Typing a prompt doesn’t meet that threshold.
2. Effort perception shapes how listeners rate music quality
Credit: Photo by XT7 Core on Pexels
Psychologist Justin Kruger and colleagues published the effort heuristic study in 2004. They showed participants identical poems. One group was told the poet spent 18 hours writing it. The other group was told 4 hours.
The 18-hour poem was rated significantly higher in quality, value, and liking. Same words. Same structure. Different backstory. The effect was stronger among non-experts, which describes most music listeners.
Norton, Mochon, and Ariely found a similar pattern in 2011: people paid 63% more for items they assembled themselves. Labor creates psychological ownership.
AI-generated music collapses perceived effort to near zero. Even when producers spend hours iterating on prompts, curating outputs, and shaping results, the label “AI-generated” overrides it. The effort heuristic doesn’t measure actual effort. It measures what your brain believes went in.
3. Identical music scores lower when labeled AI-generated
An MIT Media Lab study (N=152) played participants the same pieces of music under different labeling conditions. Human-composed tracks were rated as significantly more effective at eliciting target emotional states. Participants described them as having “imperfection, flow, and soul.”
White et al. (2025) confirmed the pattern: listeners project more emotional qualities onto music they believe a human made. This holds across genres.
This is calibrated, not irrational. The label carries information about origin. Origin carries information about intent, struggle, and meaning. When you know a human wrote it, your brain factors in the creative process. When you know an algorithm generated it, that signal disappears.
4. Mirror neurons make listeners simulate human effort
Zachary Wallmark and Marco Iacoboni ran an fMRI study at UCLA measuring brain activity during music listening. They found the mirror neuron system fires when you listen to music, and higher-empathy listeners show stronger activation.
When you watch a live performer, your brain physically simulates the cognitive effort of what they’re doing. You feel the difficulty because, neurologically, you’re partially experiencing it.
A 2025 paper in Frontiers in Humanities made the connection explicit: “AI can reproduce artistic forms but cannot generate the life-oriented intentionality or empathic structures inherent in human expression.”
Freestyle rapper Harry Mack articulated this on LinkedIn.
He concedes AI would be technically superior: infinite memory, no filler, never forgets a word. Then he compares it to chess. “Nobody is showing up to watch tournaments where they sit 2 supercomputers across from each other. The excitement is watching real human beings at the best level compete.” The mirror neuron research explains why.
5. $40 billion in live music spending confirms the brain science
Credit: Photo by Rahul Pandit on Pexels
Live music revenue hit $23.6B in 2024. Projections put it at $40.6B by 2032. Live Nation reported 159M fans attended shows in 2025, an all-time record. Ticket prices are running 20-30% above 2019 levels.
Deezer’s data tells the other side: 28% of uploads to their platform are AI-generated, but those tracks account for only 0.5% of streams. The market is flooding with AI music. Listeners aren’t choosing it.
The money flows toward irreducible human presence. Paul Delaroche declared “From today, painting is dead” in 1839 when photography arrived. Photography didn’t kill painting. It freed it from realistic representation and triggered Impressionism, Cubism, and abstract art. AI may do the same for music: force it to become more irreducibly, vulnerably human.
The honest counterargument (and where it breaks down)
YouGov (2022) found 58% of listeners would listen to AI music if it sounds good, regardless of origin. Among Gen Z, 72%. A ScienceDirect study showed pop songs labeled “AI-generated” scored higher on positive emotions like happiness and energy than those labeled “human.”
For background music, playlists, and functional audio, the effort heuristic and authorship bias may not matter much. If you’re studying or working out, you probably don’t care who (or what) made the track.
But the counterargument collapses at depth. Fandom, identity formation, emotional catharsis, concert attendance: these depend on the perceived presence of a human on the other end. Nobody gets a tattoo of an AI-generated song.
But Collingwood’s classification of the output doesn’t tell the full story. When Margaret Bynum, the creator behind China Styles, fed her childhood diary entries about growing up in an abusive household into an AI tool, she wasn’t executing a prompt. She was confronting her own pain and discovering what she needed to say through the act of creating. That moment of emotional discovery is exactly what Collingwood called art. The AI was the instrument. The expression was hers.
The same applies to Olivia B. Moore, a 43-year-old mother of five who lost her IT job and turned to Suno to process depression, grief, and survival, creating music she describes as a chronological healing journey. She had no product in mind. She was finding out what she felt by making something. When you read the comment sections on their social media posts, people respond to the human story, not the production method.
By Collingwood’s framework, the generated tracks are still craft. But the moment of discovered expression, choosing what to confront and finding out what you feel through the making, fits his definition of art. The art lives outside the audio file. It lives in that internal creative moment. These artists are not disproving Collingwood. They are proving that the expression he described can happen with AI as the instrument.
What this means if you make music (with and without AI):
Attribution: R.G. Collingwood, 1936, by Monozigote, CC BY-SA 4.0
The takeaway: If Collingwood is right that art requires discovered expression, then showing your creative process is how you signal “this is art, not craft” to your audience.
John Dewey argued in 1934 that art is not the object. It is the entire experience: creation, sharing, and audience participation, all of it. The value lives in the process, not the product. The effort heuristic (Kruger et al., 2004) confirms this from the other direction: your audience’s brain uses perceived effort as a shortcut for quality. An MIT Media Lab study of 152 listeners showed identical music scored lower on emotional depth when labeled AI-generated. Same sound, different story behind it, different experience.
The mirror neuron research says your audience’s brains need visible effort to fully engage. Studio footage, live performance clips, behind-the-scenes process: these aren’t marketing extras. They are the product.
The question of whether AI will replace musicians misses the point Collingwood made 87 years ago. AI can produce music. By his framework, that output is craft. But the artists proving him most right are the ones using AI while making their creative process visible. The discovered expression Collingwood described doesn’t have to happen inside a DAW. It can happen in a behind-the-scenes video, a vulnerable social media post, or a story about why you wrote what you wrote. That is where art lives, whether you use AI or not.
Frequently asked questions
Is AI-generated music considered art?
By R.G. Collingwood’s 1938 framework, no. AI music generation follows a predetermined plan (a text prompt) toward a known output, which Collingwood classifies as craft. Art requires the creator to discover what they’re expressing in the act of expressing it. AI systems don’t meet that condition.
Why does AI music feel less emotional to listeners?
MIT Media Lab research found identical music rated less moving when labeled AI-generated. This is the authorship bias effect: listeners use origin information as a signal for intent and meaning. The effort heuristic (Kruger et al., 2004) reinforces it: your brain uses perceived effort as a shortcut for quality, and the label “AI-generated” collapses that perception.
What are mirror neurons and how do they relate to music?
Mirror neurons fire both when you perform an action and when you watch someone else perform it. UCLA fMRI research showed they activate during music listening, especially in high-empathy individuals. When you watch a live performance, your brain simulates the performer’s effort, which is why live music triggers stronger emotional engagement than recorded or AI-generated music.
Is live music growing despite AI?
Yes. Live music revenue hit $23.6B in 2024 with projections of $40.6B by 2032. Live Nation reported 159M fans in 2025. AI-generated music accounts for 28% of Deezer uploads but only 0.5% of streams, suggesting listeners choose human-made music when emotional engagement matters.