How is Facebook tackling deepfakes?
In the aftermath of two high-profile cases on its platform – doctored videos smearing Democratic House Speaker Nancy Pelosi and Mark Zuckerberg himself – Facebook called on users in late 2019 to help bolster its deepfake detection efforts via a public ‘challenge’.
Those who took part were given 100,000 short clips containing footage from 3,000 actors hired by Facebook. Participants were tasked with building their own custom automatic detection algorithms to identify which samples from this pool Facebook had tampered with.
Almost a year on, we finally have the results on record. More than 2,000 participants – most from the tech industry or academia – put their algorithms to the test, and the most successful entry achieved an impressive detection rate of 82%. Considering the multitude of digital exploits that go into deepfakes today – blurs, frame-rate modifications and overlays, to name just a few – that is a genuinely promising return.
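To make the headline figure concrete, here is a minimal sketch of how a detection rate like that 82% can be computed over a set of labelled clips. The labels and predictions below are hypothetical, and the challenge's actual leaderboard scoring was more involved than plain accuracy.

```python
# Minimal sketch: computing a detection rate (accuracy) over labelled clips.
# The data here is hypothetical, not Facebook's challenge data.

def detection_rate(predictions, labels):
    """Fraction of clips whose real/fake call matches the ground truth."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must be the same length")
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# 1 = deepfake, 0 = genuine footage
labels      = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
predictions = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

print(f"{detection_rate(predictions, labels):.0%}")  # → 80%
```

In this toy run the algorithm misses one deepfake and falsely flags one genuine clip, giving 8 correct calls out of 10.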
The winner of the challenge, Selim Seferbekov, was awarded $500,000 and was then tasked with pitting his winning algorithm against a ‘black box’ dataset filled with the most intricate forms of deepfake deception known to AI experts. Once that score was finalised and averaged with the earlier results, the overall figure came out at just 65%.
Should these results be a concern for Facebook?
This study highlights just how difficult it truly is to spot a deepfake in the wild. After all, the adaptive nature of AI means that as soon as existing exploits are closed off, new ones may crop up quickly and it’s back to square one. It’s a frustrating, never-ending game of cat and mouse.
However, Facebook researchers are confident that the data collected from this test will prove invaluable in reinforcing their own preventative software that’s currently in the works. Facebook is also planning to release the source code for the most successful algorithms – including Seferbekov’s – to assist other researchers in detecting phoney videos before they go viral.
For now, Facebook is playing it smart and keeping its own detection software under wraps to prevent any reverse engineering in 2020. Chief technology officer Mike Schroepfer says that, as of right now, deepfakes are ‘not a big issue’ on Facebook, but that he and his team are working tirelessly to avoid being ‘caught flat footed’ in the build-up to the US election this November.
As it stands, it remains to be seen whether we’re really equipped to deal with the next wave of deepfakes. But you can guarantee one thing: it is coming.