
Should we put restrictions on AI in music?

An anonymous artist called ‘ghostwriter’ created a song using AI-generated vocals from Drake and The Weeknd. It gained over 250,000 streams in a matter of days before it was pulled from Spotify. Is this the start of an AI takeover in music?

A song made using AI-generated vocals from The Weeknd and Drake was pulled from Spotify last week after amassing over 250,000 streams in a few days.

Posted by an artist called ‘ghostwriter’, the track laid a minimal, piano-led instrumental under AI-generated Drake vocals and a hook mimicking The Weeknd. It racked up over 9 million views on TikTok and received mostly positive attention from listeners.

Users created thousands of response videos praising the lyricism, with some joking about an inevitable lawsuit.

Artificial vocals have become so popular that Drake himself responded to a fake Ice Spice cover several weeks ago, declaring that it was ‘the final straw’. David Guetta played a track on stage that incorporated old-school Eminem vocals generated by AI, describing them as the future of music. We’ve also seen fake Kanye West versions of songs posted on YouTube, such as Justin Bieber’s ‘Love Yourself’ and Cardi B’s ‘WAP’.

The wave of AI-based content that’s flooded our social media feeds in the past year has caused intrigue and concern in equal measure. Programs like DALL-E and Midjourney can now produce polished, seamless images in seconds. All you need is a prompt and an internet connection.

AI content is evolving rapidly, calling the validity of our traditional creative methods into question.

How will artificial images, video, vocals, and written words be integrated alongside authentic, human content? Will this technology leave many of us redundant? Could the likeness of celebrities be abused for the sake of sales and streaming numbers?

As things currently stand, we’ve no way of answering these questions. AI vocals and voice synthesisers are so new that there is no legal framework to protect artists or establish universal rules. The music industry has yet to adapt, leaving artists and musicians at risk of exploitation.


How could AI become a positive tool within the industry?

We may be in an ambiguous place with our AI tools, but that doesn’t mean they can’t become a useful and well-integrated part of music creation. After all, songwriting has adapted and shifted significantly alongside technological improvements over the decades.

Digital production and plugins are as much a part of the music process as any guitar or percussion instrument. There’s no reason why AI can’t be moulded and incorporated in much the same way.

For example, we’re already seeing new machine learning programs and software help clean up audio and voice recordings.

Adobe Podcast AI can take scratchy, noisy, or low-quality recordings and substantially improve their clarity. Considering that some of the biggest artists in the world now record vocals on their iPhones – Kanye being a notable example – these quick fixes can save time and energy.
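Adobe’s tool itself is a closed web service, but the broad technique behind this kind of clean-up, spectral noise reduction, can be sketched with open-source libraries. Here is a minimal Python example, assuming the noisereduce, librosa, and soundfile packages and a placeholder file name:

```python
import librosa
import noisereduce as nr
import soundfile as sf

# Load the recording as mono at its native sample rate
y, sr = librosa.load("raw_take.wav", sr=None, mono=True)

# Estimate a noise profile from the signal itself and subtract it
# from the spectrogram, then resynthesise the cleaned audio
cleaned = nr.reduce_noise(y=y, sr=sr)

sf.write("cleaned_take.wav", cleaned, sr)
```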

AI technology can also break down economic barriers. Tools like LANDR offer quick, cost-effective mastering that lets amateur creators get stuck in without spending thousands on extra equipment. Alongside well-established software such as FL Studio, it’s now possible for most people to make their own music to a high standard.
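LANDR’s system is proprietary, but the open-source matchering library illustrates one comparable approach: reference mastering, where a rough mix is matched to the loudness, frequency balance, and stereo width of a finished track. A minimal sketch, with placeholder file names:

```python
import matchering as mg

# Reference mastering: process a rough mix so that it matches the
# character of a chosen finished track. File names are placeholders.
mg.process(
    target="my_rough_mix.wav",
    reference="commercial_reference.wav",
    results=[mg.pcm16("my_mastered_track.wav")],
)
```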

Artificial vocals may be used to patch up errors in a performance, too. If an artist has a single vocal recording they want to use in a track, AI can be used to clean up, pitch-correct, or swap out individual sounds or phrases.
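Commercial pitch-correction tools track and retune a vocal continuously, but the simplest version of the idea, shifting an off-pitch phrase by a fixed interval, can be sketched with the open-source librosa library. The file names and the one-semitone shift below are illustrative assumptions:

```python
import librosa
import soundfile as sf

# Load the isolated vocal phrase
y, sr = librosa.load("flat_phrase.wav", sr=None, mono=True)

# Raise the whole phrase by one semitone (n_steps is in semitones)
corrected = librosa.effects.pitch_shift(y, sr=sr, n_steps=1)

sf.write("corrected_phrase.wav", corrected, sr)
```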

This could prove especially useful when artists alter songs after release due to controversy. Lizzo, for example, changed the lyrics to her song ‘Grrrls’ last year after fans pointed out an ableist slur. AI could make that kind of fix far quicker.


Why could AI tools become a moral and legal issue?

AI is still most controversial when creating an artificial likeness of real musicians, especially rappers and hip-hop artists.

As there are currently no real copyright laws around using AI to replicate another person’s voice, it’s hard to know where to draw the line for legal use. If a producer writes and creates an entire track but uses artificial vocals, how much of the song do they own? Is it an infringement on the original voice recordings used to generate the vocals? Things can easily get very complicated.

The fake Drake and The Weeknd song made it onto Spotify quite easily and generated a significant number of streams before it was taken down. As more people become familiar with AI tools, we’ll likely see more and more of these songs be uploaded to streaming platforms.

Keep in mind too that Spotify is likely to be the quickest to take action on any artificial song.

Other sites, like SoundCloud and TikTok, may be far looser about what they allow. Unless widespread rules are put in place, it will become difficult to discern a genuine song by an artist from a fake. This threatens the livelihoods of many artists, and could damage the well-established branding of individual acts.

Some have compared the AI takeover to the days of Napster and LimeWire, when emerging technologies quickly disrupted and permanently altered the status quo for the music industry.

Mainstream platforms like Spotify and major record labels will need to be savvy to the impending risks of AI and push for clear rules and policies quickly if they’re to remain ahead of the curve. It’ll take more than a few empty threats from Drake to slow down a rapidly changing landscape.
