AI’s place in the music industry is becoming a copyright minefield.
AI can create music, but does this make AI an artist? Do AI algorithms create their own work, or are the humans behind these algorithms the real ‘artists’? If an algorithm is fed music by a certain artist, does it then owe that artist royalties, or can it be considered an independent creative body?
These are some of the questions that AI’s involvement in music production is beginning to pose to the industry and to legal professionals worldwide.
AI algorithms have been a central part of music production for many years, commonly used to help musicians with composition, performance, theory, and digital sound processing. Increasingly, however, software is being developed that uses AI to produce music itself.
A good example of this is the app Endel, which uses AI to generate reactive, personalised soundscapes for ‘functional’ rather than creative use (that is, as background music in a video or computer game). The Endel team created around 600 usable tracks before signing a distribution deal with Warner Music. As part of the contract, however, Warner required information on who wrote each track in order to register the copyrights.
An awkward silence ensued. The trouble was, it was the AI that had generated the audio. In the end, the team decided to list Endel’s six employees as the songwriters for all 600 tracks. And so, six software engineers suddenly gained songwriting credits that they have a 95% chance of always bringing up at parties.
But could the AI itself have been credited?
Technically there’s nothing stopping this from happening. The word ‘human’ doesn’t appear once in US copyright law. The existing terminology clearly assumes that copyright applies to humans, but there’s little litigation addressing the word’s absence, and this is starting to cause problems.
AI capable of generating music operates through learning algorithms: to create music from scratch, it must first be fed examples. As the engineers at Endel did, you would generally feed an AI many examples of genre, tonality, style, and mood in order to create a good little versatile robot artist. Say, however, you were to feed an AI only samples of Adele’s songs. It would, of course, produce music based solely on her voice and style. In that case, is Adele owed royalties for a robot that exclusively emulates her? It’s looking like the answer might be ‘no’.
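To see why the ‘style’ question is so slippery, it helps to picture how narrow such a system’s world is. The sketch below is a deliberately toy illustration, not any real product’s code: a tiny Markov-chain ‘composer’ whose training corpus (a couple of invented note sequences standing in for one artist’s catalogue) is all it will ever know.

```python
import random
from collections import defaultdict

# Hypothetical note sequences standing in for one artist's songs.
training_corpus = [
    ["C4", "E4", "G4", "E4", "C4"],
    ["A3", "C4", "E4", "C4", "A3"],
]

# Learn which note tends to follow which. The model can only ever
# reproduce transitions that appear somewhere in its training data.
transitions = defaultdict(list)
for melody in training_corpus:
    for prev, nxt in zip(melody, melody[1:]):
        transitions[prev].append(nxt)

def generate(start="C4", length=8):
    """Emit a melody by walking the learned transitions."""
    note, melody = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, [start]))
        melody.append(note)
    return melody

print(generate())  # every interval here traces back to the input songs
```

Even a toy like this makes the tension visible: the generator contributes no material of its own, yet its output doesn’t ‘directly sample’ anything either.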
‘There’s nothing legally requiring you to give her any profits from it unless you’re directly sampling’, Meredith Rose, policy counsel at Public Knowledge, told The Verge. There’s room for debate on whether this is good for musicians. ‘If an algorithm is only fed Beyoncé songs’, Rose continues, ‘and the output is a piece of music, it clearly couldn’t have added anything to this, and there’s nothing original there.’
As the law is generally reluctant to extend copyright protection to anything created ‘in the style of’ another artist (artists are constantly influenced by their peers and predecessors), an AI program would have to produce a song that sounds specifically like an already existing piece.
Only then could an artist claim copyright infringement. And it’s likely that only the most successful artists would have the means to fight such a complicated battle in court.
What’s more, it’s not clear whether AI can legally be trained on publicly purchasable material in the first place. When you download a song through your Spotify subscription, are you also acquiring the right to use its audio as AI training data? And even if an AI system did mimic a certain artist’s sound closely enough to prompt a copyright case, how do you prove that the AI was designed to copy that artist’s music?
Under copyright law, the infringing author must be shown to have had reasonable access to, and been influenced by, the work they’re accused of ripping off. But how do you prove that a machine has been ‘influenced’?