
Could AI become a music artist?

AI’s place in the music industry is becoming a copyright minefield.

AI can create music, but does this make AI an artist? Do AI algorithms create their own work, or are the humans behind these algorithms the real ‘artists’? If an algorithm is fed music by a certain artist, does it then owe that artist royalties, or can it be considered an independent creative body?

These are some of the questions that AI's involvement in music production is beginning to pose to the industry and to legal professionals worldwide.

AI algorithms have been a central part of music production for many years, commonly used to help musicians with composition, performance, theory, and digital sound processing. Increasingly, however, software is being developed that uses AI to produce the music itself.

A good example of this is the app Endel, which uses AI to generate reactive, personalised soundscapes for 'functional' rather than creative use (i.e. as background music in a video or computer game). The Endel team created around 600 usable tracks before signing a distribution deal with Warner Music. As part of the contract, however, Warner required information about who wrote each track so that it could register the copyrights.

An awkward silence ensued. The trouble was, it was the AI that had generated the audio. In the end, the team decided to list the six employees at Endel as the songwriters for all 600 tracks. And so, six software engineers suddenly gained songwriting credits that they have a 95% chance of always bringing up at parties.

But could the AI itself have been credited?

Technically, there's nothing stopping this from happening. In US copyright law, the word 'human' doesn't appear once. The existing terminology clearly assumes that copyright law will apply to humans, but there is little existing litigation addressing the word's absence, and this is starting to cause some issues.

AI capable of generating music works through learning algorithms: to create music from scratch, it must first be fed examples. Like the engineers at Endel, you would generally feed an AI many examples of different genres, tonalities, styles, and moods in order to create a good, versatile little robot artist. But say you were to feed an AI only samples of Adele's songs. It would, of course, produce music based solely on her voice and style. In that case, is Adele owed royalties for a robot that exclusively emulates her? It's looking like the answer might be 'no'.
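To make the 'fed only Adele' point concrete, here is a deliberately toy sketch in Python. Real music generators use far more sophisticated models than this first-order Markov chain, and the note sequences and names like adele_only_corpus are purely hypothetical; the sketch only illustrates the underlying dependency, that a generator can only recombine whatever it was trained on.

```python
# Toy illustration (not any real product's code): a first-order Markov chain
# "trained" on note sequences. If the training corpus contains only one
# artist's melodies, the generator can only recombine that artist's patterns.
import random
from collections import defaultdict

def train(melodies):
    """Count which note tends to follow which across the training melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current_note, next_note in zip(melody, melody[1:]):
            transitions[current_note].append(next_note)
    return transitions

def generate(transitions, start, length=8):
    """Walk the learned transitions to produce a new melody."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break
        melody.append(random.choice(options))
    return melody

# Hypothetical training data standing in for "only Adele songs".
adele_only_corpus = [
    ["A", "C", "E", "D", "C", "A"],
    ["A", "E", "D", "C", "D", "E"],
]

model = train(adele_only_corpus)
print(generate(model, start="A"))  # every note pattern comes from the corpus
```

Swap in a mixed corpus and the output stops sounding like any one source; restrict the corpus to a single artist and it can't help but echo them. The legal question is whether that echo, without direct sampling, entitles the artist to anything.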

‘There’s nothing legally requiring you to give her any profits from it unless you’re directly sampling’, Meredith Rose, policy counsel at Public Knowledge, told The Verge. There’s room for debate on whether this is good for musicians. ‘If an algorithm is only fed Beyoncé songs’, Rose continues, ‘and the output is a piece of music, it clearly couldn’t have added anything to this, and there’s nothing original there.’

As the law is generally reluctant to extend copyright protection to anything created 'in the style of' another artist (artists are constantly influenced by their peers and predecessors), the AI program would have to create a song that sounds specifically like an already existing piece.

Only then can copyright infringement be claimed by an artist. But it’s likely that only the most successful artists would have the means to fight this kind of complicated battle in court.

What's more, it's not clear whether AI can legally be trained on publicly purchasable material in the first place. When you download a song through your Spotify subscription, are you also acquiring the right to use its audio as AI training data? And even if an AI system did seem to mimic a certain artist's sound closely enough to prompt a copyright case, how do you prove that the AI was designed to copy that artist's music?

Under copyright law, the allegedly infringing author must be shown to have been reasonably exposed to, and influenced by, the work they're accused of ripping off. But how do you prove that a machine has been 'influenced'?

If this article is starting to feel like one big heap of rhetorical questions, we don't blame you. Authorship has always been a contentious issue, ever since early modern writers decided that they were not, in fact, contributing to an 'authorial tradition' (as the Anglo-Saxons would have had it) but producing original works for which they deserved credit. In fact, arguments about whether code can be the author of a musical work are over 50 years old: the US Copyright Office raised the concern in its 1965 annual report.

Despite this early red flag, current copyright law is still vague when discussing the authorship of works that weren’t created by humans. An amusing example of the kind of Kafkaesque case this uncertainty produces is a US ruling on a monkey taking a selfie.

The case centred on a crested macaque that picked up a remote trigger for a photographer's camera and took a photo of itself. The resulting dispute was over which creator should own the copyright to the photo, the monkey or the photographer (which was presumably very important?). Ultimately, the US Court of Appeals for the Ninth Circuit (one level below the Supreme Court) decided that a monkey could not own copyright.

The court made two points: first, that copyright law's use of terms like 'spouse' and 'children' implies that an author must be human; and second, that whilst corporations have previously been allowed to hold copyright, corporations are run by humans, an exception that still excludes all non-humans.

So, if a monkey isn't allowed to own copyright, wouldn't the authorship of a song created by AI simply go to the humans who created the program? Another spanner in the works, however, is that the Compendium of US Copyright Office Practices does have a whole sub-section called 'The Human Authorship Requirement', which explicitly states that non-humans, such as plants or supernatural beings, cannot be authors.

The question truly seems to rest on how we develop and define artificial intelligence going forward. Whilst the distinction between organic and artificial intelligence may seem clear now, as sci-fi has been warning us for years, this may not always be the case. With technological advancements, we may enter a world where AI can use existing music to create completely unprecedented compositions. We could even enter a world where AIs start suing one another for copyright. But that might be getting a bit ahead of ourselves.

For now, lawyers will have to continue navigating the complicated issue of AI copyright, and artists will have to be on the lookout for ATMs singing their songs…
