Facebook has partnered with Microsoft and several universities on a ‘detection challenge’ that aims to improve AI’s ability to detect deep fake content.
You’ve probably seen ‘fake news’ all across the internet over the past few years. Popularised by Trump during his 2016 election campaign, the term has become commonplace when discussing online content – and dismissing criticism.
Whilst Trump’s use of the phrase has always been partisan (to put it mildly), it can be used more democratically to refer to a general awareness of media bias. Experts continually advise that we should be vigilant when it comes to online news and propaganda, particularly given the concerns over Russian bots and companies such as Cambridge Analytica misusing personal data.
That’s not just limited to news and headlines. The deep fake phenomenon has grown considerably in recent years as the tech needed to pull it off becomes cheaper and more readily available. We’ve written about its growth before – and now Facebook is calling on the public to help better understand it.
The initiative is called the ‘Deepfake Detection Challenge’ and is the combined effort of Facebook, Microsoft, the Partnership on AI coalition, and seven universities. A lot of big brain power is behind this, basically.
Participants in the challenge will be screened and given access to a database of deep fakes, all featuring professional, consenting actors. Their task is to create software that can accurately distinguish those fakes from genuine videos.
Facebook released a blog post last week announcing that it is dedicating $10 million to the project, with grants and awards going to the most effective programmes. If you’re a budding coder looking to throw your hat in the ring, you’ll have to wait a little – according to the official FAQ website, the challenge won’t start until the end of this year.