An anonymous artist called ‘ghostwriter’ created a song using AI-generated vocals from Drake and The Weeknd. It gained over 250,000 streams overnight before it was pulled from Spotify. Is this the start of an AI takeover in music?
A song made using AI-generated vocals from The Weeknd and Drake was pulled from Spotify last week after amassing over 250,000 streams in a few days.
Posted by an artist called ‘ghostwriter’, the track laid artificial Drake verses and a hook by The Weeknd over a minimal, piano-led instrumental. It amassed over 9 million views on TikTok and received mostly positive attention from listeners.
Users created thousands of response videos praising the lyricism, with some joking about an inevitable lawsuit.
Artificial vocals have become so popular that Drake himself responded to a fake Ice Spice cover several weeks ago, declaring that it was ‘the final straw’. David Guetta played a track on stage that incorporated old-school Eminem vocals generated by AI, describing them as the future of music. We’ve also seen fake Kanye West versions of songs posted on YouTube, such as Justin Bieber’s ‘Love Yourself’ and Cardi B’s ‘WAP’.
The wave of AI-based content that’s flooded our social media feeds in the past year has caused intrigue and concern in equal measure. Programmes like DALL-E and Midjourney can now produce seamless, convincing images in seconds. All you need is a prompt and an internet connection.
AI content is evolving rapidly, calling the validity of our traditional creative methods into question.
How will artificial images, video, vocals, and written words be integrated alongside authentic, human content? Will this technology leave many of us redundant? Could the likeness of celebrities be abused for the sake of sales and streaming numbers?
As things currently stand, we’ve no way of answering these questions. AI vocals and voice synthesisers are so new that there is no legal framework to protect artists or establish universal rules. The music industry has yet to adapt, leaving artists and musicians at risk of exploitation.
How could AI become a positive tool within the industry?
We may be in an ambiguous place with our AI tools, but that doesn’t mean they can’t become a useful and well-integrated part of music creation. After all, songwriting has adapted and shifted significantly alongside technological improvements over the decades.
Digital production and plugins are as much a part of the music process as any guitar or percussion instrument. There’s no reason why AI can’t be moulded and incorporated in much the same way.
For example, we’re already seeing new machine learning programmes and software help clean up audio and voice recordings.
Adobe Podcast AI can take scratchy, noisy, or low-quality recordings and enhance them to a much higher level. Considering that some of the biggest artists in the world now record vocals on their iPhones – Kanye being a notable example – these quick fixes can save time and energy.
AI technology can also break down economic barriers. Tools like LANDR master music in a quick, cost-effective package that allows amateur creators to get stuck in without spending thousands on extra equipment. Alongside well-established software such as FL Studio, it’s now possible for most people to make their own music to a high standard.
Artificial vocals may be used to patch up errors in a performance, too. If an artist has only a single vocal take they want to use in a track, AI can be used to clean it up, pitch-correct it, or swap out individual sounds and phrases.
There are occasions where this could prove extremely useful, such as when artists alter songs after release due to controversy. Lizzo changed the lyrics to her song ‘Grrrls’ last year after fans pointed out an ableist slur, for example. AI could make this process much quicker.