
Deepfakes: a growing threat to democracy

In 2019, deepfakes evolved from a relatively lighthearted pastime into a serious ethical threat, slandering celebrities and disrupting politics.

In case you’re unfamiliar with the term ‘deepfake’, it refers to the manipulation and replication of human features in video, using software built on artificial intelligence, face tracking, and deep learning.

In layman’s terms, it’s the process of mimicking a person’s face and/or voice convincingly enough to look and sound real.

Networks of interconnected nodes autonomously run calculations on input data. With enough ‘training’, these nodes begin to arrange themselves to complete a specific task. This is deep learning, and in this context it’s primarily used to superimpose celebrity likenesses onto existing footage, convincingly manufacturing bogus content on the fly.
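To make that ‘nodes arranging themselves’ idea concrete, here’s a minimal toy sketch (nothing to do with actual deepfake software): a tiny two-layer network of weighted nodes that, through repeated training passes, adjusts its own weights until it reproduces a simple target pattern (XOR). All names and numbers here are illustrative choices, not taken from any real deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: inputs and the XOR pattern we want the nodes to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases: the 'interconnected nodes'.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each node combines its inputs and applies a non-linearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass ('training'): nudge every weight to shrink the error,
    # so the nodes gradually arrange themselves around the task.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

# Predictions drift towards the target pattern [0, 1, 1, 0].
print(np.round(out.ravel(), 2))
```

Deepfake systems apply the same principle at vastly larger scale, with images of a face standing in for the toy inputs above.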

While programs of this nature may have started out as a harmless pastime on the back pages of the internet, they’ve now been catapulted into the mainstream, re-purposed as a defamatory weapon to target prominent celebrities and political figures.


So where did it all start?

The term ‘deepfake’ originates from an anonymous Reddit user of the same alias, who rose to prominence in 2017 with a series of fake celebrity pornographic videos. Aside from the obvious can of worms opened by this invasive and disturbing content, it provided a glimpse into the wider ramifications of the technology – ramifications still being tackled today.

It marked the first time an individual was able to quickly and simply create a high-quality, convincing human imitation with readily available means. It certainly wasn’t the first ‘superimposed’ content to surface on the internet, but it was the first created so rapidly using consumer-grade graphics hardware.

‘deepfake’ housed these celebrity imitations in a single subreddit of his own creation, and shortly after its conception members began contributing their own edits, mostly amusing movie clips. A whole section was dedicated to inserting the ever-memeable actor Nicolas Cage into random movies and TV shows.


The dangerous shift

As many predicted during the initial rise of deepfakes, the technology quickly went on to enable more ethically dubious practices.

In June 2018, a computer program for deepfake creation, dubbed FakeApp, rose to prominence after being made available for download – it used the same techniques originally peddled by Reddit’s deepfake.

Shortly after, several videos emerged involving politicians, intended to deceive viewers into believing fictitious events and statements. Notably, the Flemish socialist party sp.a posted a video to its Twitter and Facebook accounts showing Donald Trump appearing to taunt Belgium for remaining in the Paris climate agreement.

While most viewers sniffed out the forgery, those unaware of deepfaking believed the video to be real. Following this revelation, numerous social platforms, including Twitter, Discord, and Gfycat, explicitly banned deepfakes and all associated communities, with Reddit following suit shortly after… and Reddit rarely says ‘no’ to anything.

In July 2018, an edited interview with liberal U.S. Representative Alexandria Ocasio-Cortez went viral, amassing over 4 million views on YouTube. The original interview was doctored to insert a different interviewer, making it appear that Ocasio-Cortez was struggling to answer basic questions. This seemed less like an amusing use of the technology and more like a ploy to smear her reputation.

This particular example – while not a conventional deepfake – marked the technology’s expansion into other forms of video alteration, making fakes all the more difficult to expose.

In May 2019, arguably the most effective digital slander of a government official yet surfaced on Facebook. An edited video featuring Democratic House Speaker Nancy Pelosi garnered over 2 million views. Her speech was slowed dramatically, making it seem slurred and prompting viewers to question Pelosi’s mental state.

Trump’s personal lawyer, Rudy Giuliani, shared the video believing it to be real, further confirming the possible dangers of deepfake software.

To Giuliani’s annoyance, Facebook moderators refused to remove the fake, citing policies on free expression, instead opting to attach contextual information to the video. Ironically, Facebook’s mastermind Mark Zuckerberg would become the next victim of fakery.

The young billionaire’s likeness was used to promote Spectre, an exhibition of deepfakes created by the artist Bill Posters, with Zuck’s digital caricature praising the project and exclaiming that he ‘owes it all’ – his success – to Spectre.

Not surprisingly, Facebook swiftly changed its stance on filtering counterfeit videos, flagging the post and severely reducing its distribution and viewership on the platform.


What’s being done?

This month, members of the U.S. House of Representatives officially heard testimony on the recent influx of deepfakes and examined the potential threats to national security posed by AI-altered content.

Tech experts backed by the Pentagon have been tasked specifically with developing means of detecting and flagging deepfakes, but the results have been mixed. The second the inner workings of a detection technology (chiefly facial tracking) are revealed, forgers gain another foothold towards avoiding detection. The very nature of AI is to revise and adapt, after all.

The most foolproof plan for prevention today is to tackle the legislation and policies of host networks, with federal officials pushing for a nationwide ban on digital misinformation, and specifically deepfakes, by 2020.


The divide of opinion

While obvious measures are being taken to prevent deepfakes, a large portion of the online community feels that this crackdown threatens our right to free expression. The technology isn’t harnessed exclusively for nefarious activity; it’s often used for satire, art, critique, and comedy. Still, government officials assert that the potential threats far outweigh these creative applications.

The government’s firm stance and lack of room for manoeuvre is understandable, though, considering the severe implications deepfakes could have for our sense of democracy.

For the older generation, who are largely oblivious to these technologies, deepfakes could conceivably influence decisions about the election of government representatives. For instance, a fake video could surface of Joe Biden being derogatory towards refugees, deterring voters from casting ballots in his favour. Once political narratives are shifted, they’re notoriously difficult to bring back.

What’s more, flagged doctored videos take time to filter through the security systems of most social media platforms, remaining active on people’s timelines in the meantime. It’s conceivable, then, that opportunists could push their personal agendas with fake videos on the eve of important elections, and we’d all be blissfully unaware of the fact until after inauguration.

There are implications that reach beyond politics, too. As the technology becomes more advanced, requiring less and less imagery to replicate facial features, deepfakes could become the go-to method of slander. Think about it: we have hundreds of photos across our social media channels.

Whether it’s a fake video application sent to an employer on someone else’s behalf, a derogatory rant or hate speech uploaded to YouTube, or even a false suicide message, the scope for damaging online content will increase tenfold.

This conundrum is sure to develop as platforms grapple with issues of consent, free expression, and the prevention of spreading misinformation, and we’ll be there to spill the beans on future changes.

What’s your personal stance on deepfakes? Should they be removed entirely, or just regulated more carefully? Let us know in the comments.
