
Facebook launches deepfake detection challenge

Partnering with Microsoft and several universities, Facebook's 'detection challenge' aims to improve AI's ability to detect deepfake content.

You’ve probably seen ‘fake news’ all across the internet over the past few years. Popularised by Trump during his 2016 election campaign, the term has become commonplace when discussing online content – and dismissing criticism.

Whilst Trump’s use of the phrase has always been partisan (to put it mildly), it can also be used more even-handedly, to refer to a general awareness of media bias. Experts continually advise that we should be vigilant when it comes to online news and propaganda, particularly given the concerns over Russian bots and companies such as Cambridge Analytica misusing personal data.

That’s not just limited to news and headlines. The deepfake phenomenon has grown considerably in recent years as the technology needed to pull it off becomes cheaper and more readily available. We’ve written about its growth before – and now Facebook is calling on the public to help better understand it.

The initiative is called the ‘Deepfake Detection Challenge’ and is the combined effort of Facebook, Microsoft, the Partnership on AI coalition, and seven universities. A lot of big brain power is behind this, basically.

Participants in the challenge will be screened and given access to a database of deepfakes, all featuring professional, consenting actors. Their task: create software that can accurately distinguish those videos from genuine ones.
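For a sense of what that software might involve: a common baseline approach (not one Facebook has prescribed) treats detection as binary classification of individual video frames. Here’s a minimal, illustrative sketch in Python using PyTorch – the model choice, preprocessing, and labelling convention are assumptions for demonstration, not the challenge’s actual starter code.

```python
# Illustrative sketch only: a frame-level real-vs-fake classifier.
# Model choice, preprocessing, and labels are assumptions, not
# anything prescribed by the Deepfake Detection Challenge.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Reuse a pretrained image backbone; swap its head for a single
# logit meaning "how likely is this frame to come from a deepfake".
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 1)

# Standard ImageNet-style preprocessing for each extracted frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    """One optimisation step on a batch of preprocessed frames.

    frames: tensor of shape (batch, 3, 224, 224)
    labels: tensor of shape (batch,), 1.0 = fake, 0.0 = real
    """
    optimizer.zero_grad()
    logits = model(frames).squeeze(1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A competitive entry would need far more than this – face detection and cropping, and some way of aggregating per-frame scores into a single per-video verdict – but the core idea is no more exotic than a well-trained classifier.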

Facebook released a blog post last week announcing that it is dedicating $10 million to the project, with grants and awards going to the most effective programmes. If you’re a budding coder looking to throw your hat in the ring, you’ll have to wait a little – according to the official FAQ website, the challenge won’t start until the end of this year.

This ‘challenge’ raises some concerns, particularly given that Facebook was pushed into taking a firmer stance against deepfake content earlier this year, after the artist Bill Posters created a convincing video of Mark Zuckerberg sinisterly declaring his control of ‘billions of people’s data’, which went viral. While Facebook’s desire to tackle deepfakes is admirable – there is currently no reliable way to detect them automatically – we should probably be wary of handing the company even more power over this as-yet-untested technology.

Plus, encouraging the general public to get clued up on how to make, distribute, and detect deepfakes could open a can of worms, accelerating the proliferation of the problem rather than slowing it down. If more deepfake content enters political discourse on the web, the ability to tell fact from fiction – and to narrate history as a whole – will be deeply threatened.

We’ll have to wait and see what comes of this challenge, but one positive is that Facebook is, at the very least, not using its users’ data. The use of consenting actors for the videos is a step in the right direction for the company, and hopefully we’ll see more respect for our data in the future. My embarrassing 2012 pictures should stay with me and me only.
