
Why the Facemega deepfake app is a slippery slope

Accessing deepfake technology is getting easier. With so many people choosing to use it for sinister purposes, it’s hard to ignore that the negative consequences of the novel tech may outweigh its benefits.

The negative and dangerous consequences of deepfakes have been heatedly discussed since AI technology made its way into the mainstream in 2017.

Experts have warned that their widespread use, combined with constantly improving AI learning systems, will ‘wreak havoc on society’ and that we are ‘not prepared’ to deal with the fallout.

Aside from the freakishly real clips posted to the @deepTomCruise TikTok account, most deepfakes don’t take long to exhibit subtle glitches that expose them as fabricated content.

Creating a flawless video of someone saying something they didn’t actually say or performing acts they didn’t do necessitates phenomenal editing skills, for one. It also requires many multimedia reference points for the AI to learn the individual’s mannerisms and speech habits well enough to be convincing.

But of course, it was only a matter of time before such technology advanced – and before it was adapted to work in the hands of everyday people.

In recent days, users of photo editing apps and Facebook have reported seeing ads for an app called Facemega. The free smartphone software allows users to upload and stitch a portrait of virtually anybody’s face onto any video of their choice.

Looking at the video of a deepfaked Emma Watson above, you can see how realistic the technology is.

Seriously, I went to the critically acclaimed ABBA holographic concert in London recently, and the lifelike movements in that clip are strikingly similar in quality.

But it’s hard to miss the suggestive nature of the video, which perfectly illustrates the dark intentions of users – and perhaps the programmers themselves. Making things more problematic is that this clip is used as advertising.

Retweeting the screen recording of the ad, one Twitter user said: ‘This is just a new way to sexually harass women. It’s taking away your bodily autonomy and forcing you into non-consensual sexual material, which can then be used to humiliate and discriminate against you. Not to mention the psychological effects.’

Encouraging the use of deepfake AI for this purpose is obviously sinister. However, it shouldn’t be surprising, as the most famous uses of deepfake technology so far have either been pornographic or tied to politics.


In 2019 – just two years after the technology was created – it was reported that at least 96 percent of all deepfake videos found online were pornographic, according to research conducted by Deeptrace.

Fuelling the creation of these videos are numerous websites that exclusively host deepfake pornography. Many falsified clips uploaded to these platforms have been created by vengeful exes or people looking to tarnish the image of their enemies.

As a result, governments around the world have started to build stronger legal frameworks that limit the ways deepfake technology can be used. Admittedly, these had to be informed by existing cases, which is why tech experts warned we were unprepared for its dangers.

Back in November 2022, the UK government made deepfake pornography a jailable criminal offence. However, courts can only prosecute individuals who circulate these images or videos online.

Some US states have also put similar laws in place. Realistically, though, these policies cannot protect anyone from being inserted into graphic videos for another individual’s own viewing behind closed doors.

This thought, for the vast majority of women, will be enough to incite disgust and anger, if not a shudder and skin-crawling reaction.



As mentioned, deepfakes have also been used to incite political uproar.

Barack Obama has been depicted insulting Donald Trump. Mark Zuckerberg has been made to say that Facebook’s sole purpose is to control and exploit users of the platform.

These all went viral, revealing the level of interest and awe that deepfake technology inspires, and in some cases its capacity for trickery.

Used in other contexts, the consequences could be far more dangerous. During the initial months of Russia’s invasion of Ukraine, videos of President Zelensky emerged in which he appeared to tell Ukrainian soldiers to surrender to Russian forces.

Luckily, the real President Zelensky took to his personal social media accounts to confirm that the video had been doctored by opposition forces. He reminded his people that the fight for their country was not yet over and urged them to remain united.

It seems as if deepfakes, even when used to make a chuckle-worthy version of Queen Elizabeth II’s traditional Christmas speech, tend to stir an unsettling feeling within us.

Perhaps we know deep down that the technology is far more likely to be used with malicious intent. So, what will happen when it becomes so advanced that we can’t tell the difference between what is fact and fiction?

With free-to-download apps like Facemega, should we be worried that deepfake technology is becoming widely available through our smartphones?

I’d say that, unfortunately, we should be.