
Deep fake AI could reshape the music industry

Tech engineers are now able to simulate the vocals of musicians and artists to re-imagine old songs, which could have long-term ramifications for the industry as a whole.

Older Gen Z music fans may remember hearing Eminem’s breakout single ‘My Name Is’ all the way back in 1999.

Released as the first single off his debut studio album, the song was a smash pop culture hit thanks to its wordplay, off-the-wall character, and relentless references to popular celebrities of the time. While this approach gave Eminem a perfect introductory gateway to mainstream audiences, it also meant the song became outdated very quickly.

I’m twenty-five – practically a fossil at this point – and even I didn’t fully understand every joke and punchline when I first heard it. Considering ‘My Name Is’ is now over two decades old, it’s fun to consider just how drastically different the lyrics would be if it came out today.

Thanks to new AI tech and vocal synthesizers, this is no longer just a fictional musing. A YouTube account called ‘30 Hertz’ uploaded a reworked version of the song that uses ‘deep fake’ audio to recreate the iconic Slim Shady sound with updated lyrics. The result is mind-bending, concerning, and impressive all at once.

Re-creating old songs for the modern moment is impressive on its own, but using AI in this way may have wider-reaching ramifications that could blur the line between automated art and authentic human expression.


What is deep faking?

We’ve written a ton about deep fakes before, largely focusing on video and photo identification. ‘Deep faking’ is when an artist or algorithm creates a piece of media that adopts somebody’s likeness without their direct involvement.

You can make an individual say or do things on video or in a song that they never agreed to, which has terrifying potential to mislead and confuse.

We’re already living through an age of mistrust and deep-rooted division caused by misinformation – adding realistic-looking but inauthentic video to the mix will likely cause further headaches and disruption to the status quo.

AI-generated music is less covered, however, at least by mainstream outlets. It could have just as significant an impact as video and may be even harder to detect for everyday radio listeners.

Most of the content produced with audio AI is meme-centric and a bit silly at present – but with improvements constantly being made, the shift into serious music isn’t all that far off.


How does deep fake music work?

‘Deep fake’ music is simply the use of synthesizers to reproduce an artist’s voice, effectively turning their vocals into an instrument that can be reworked to the engineer’s choosing. You can create new lyrics and songs without the original vocalist’s knowledge or input.
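Production voice-cloning systems train neural networks on hours of a vocalist’s recordings, which is far beyond a blog snippet. The toy numpy sketch below only illustrates the underlying analyse-then-resynthesize idea the paragraph describes – a sine tone stands in for a recorded vocal, and every function name here is hypothetical:

```python
import numpy as np

SR = 16_000  # sample rate in Hz


def synth_tone(freq, duration=0.5):
    """Generate a plain sine tone -- a stand-in for a recorded vocal."""
    t = np.arange(int(SR * duration)) / SR
    return np.sin(2 * np.pi * freq * t)


def resynthesize(signal, pitch_shift=1.5):
    """Find the loudest frequency in the input, then generate a new tone
    at a shifted pitch -- a toy version of treating a captured voice as
    a playable instrument."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / SR)
    dominant = freqs[int(np.argmax(spectrum))]
    return dominant, synth_tone(dominant * pitch_shift, len(signal) / SR)


voice = synth_tone(220.0)                # a "recorded" A3 note
detected, reworked = resynthesize(voice)
print(round(detected))                   # 220
```

A real system would swap the FFT peak-picking for a learned model of the singer’s timbre, but the shape of the pipeline – capture, analyse, regenerate with new parameters – is the same.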

The recreated ‘My Name Is’ mentioned above is a perfect case study, and a similar bot was used to create a Zuckerberg diss track in Eminem’s voice at the end of last year. The YouTube channel ‘30 Hertz’ has a ton of fake AI tracks by hip-hop artists that are worth checking out if you want an extensive look at the possibilities of music deep fakes.

AI can also be used to produce original scores and remix pre-existing songs. There are a few free, gimmicky bots available online that can manipulate tracks and demonstrate how much potential there is in AI-generated music beyond just recreating the sounds of vocalists.

‘The Eternal Jukebox’, for example, rearranges any song of your choosing to be played endlessly without interruption. I tested it out with Machine Gun Kelly’s pop punk banger ‘drunk face’ and the result was surprisingly seamless.
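One plausible way a tool like this can loop a song forever is to find pairs of beats that sound alike and occasionally jump between them instead of playing straight through. Here’s a toy sketch of that idea, assuming each beat has already been reduced to a small feature vector; `similar_beats` and `endless_walk` are hypothetical names, not the tool’s actual API:

```python
import random

import numpy as np


def similar_beats(features, threshold=0.9):
    """Map each beat index to other beats whose feature vectors point in
    nearly the same direction (cosine similarity >= threshold) -- these
    become candidate jump points."""
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = unit @ unit.T
    jumps = {}
    for i in range(len(features)):
        close = [j for j in range(len(features))
                 if j != i and sim[i, j] >= threshold]
        if close:
            jumps[i] = close
    return jumps


def endless_walk(n_beats, jumps, steps=20, seed=0):
    """Step through the beats in order, occasionally jumping to a similar
    beat so playback loops instead of reaching the song's end."""
    rng = random.Random(seed)
    beat, path = 0, []
    for _ in range(steps):
        path.append(beat)
        if beat in jumps and rng.random() < 0.3:
            beat = rng.choice(jumps[beat])
        else:
            beat = (beat + 1) % n_beats
    return path


# Four toy beats: 0 and 2 sound alike, as do 1 and 3.
beats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.05], [0.0, 1.0]])
jumps = similar_beats(beats)
path = endless_walk(len(beats), jumps, steps=12)
```

Because jumps only happen between near-identical beats, the seams are hard to hear – which is why the rearranged playback can feel surprisingly seamless.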

These bots and algorithms can be upscaled to serve as more than just one-off tools and quirky sites, too. Marketing firms that take advantage of music-based AI promise clients original, automated tracks that can be tweaked for any mood, situation, or project, without the need for a songwriter.

AI.Music is one such firm. It describes its tools as being able to ‘infinitely scale and adapt human creativity’, which is either intriguing or dystopian, depending on who you ask.

We could soon be living in a world populated by both human- and robot-produced music, with the average listener unable to tell the difference. Human vocals will be sampled, reworked, and synthesized to produce new material by long-dead artists, and we may soon be able to use algorithms to create DJ sets and mixes on the fly without any manual intervention.


How could this change the industry as a whole?

It’s easy to immediately think of a bleak future where our Spotify feeds are full of automated songs that were never even touched by human hands, but experts and industry innovators think this is unlikely.

Speaking to TIME, musician Ash Koosha likens the use of AI to the arrival of synthesizers in the seventies. New changes in music recording and production will always be met with reservations, and the most likely outcome of AI-based music is rereleases, remasters, and reworkings of old songs by classic artists.

As long as its implementation is smart and well handled, there is no reason AI and bots can’t be used as another tool in a larger box of resources. There is the possibility of dodgy practices and misleading media, but early regulation and control over access to advanced AI could help stamp this out before it becomes a big issue.

For the time being we’re likely to continue seeing old-school Eminem songs, Billie Eilish clips, and playful online bots. Let’s hope it stays on the side of fun and not political manipulation.
