
Are deepfakes convincing enough to create false memories?

A new study has found that watching deepfake videos and reading short text descriptions of made-up remakes can cause people to falsely remember watching non-existent films.

Last month, researchers at University College Cork in Ireland published findings from their research into false memories, a study which indicates that the impacts of generative AI programmes may be more complicated than initially feared.

Deepfake tech has already proven itself a dangerously effective means of spreading misinformation, but according to the report, deepfake videos can alter memories of the past, as well as people’s perception of events.

To test their theory, the researchers asked nearly 440 people to watch deepfaked clips from falsified remakes of films including Brad Pitt and Angelina Jolie in The Shining, Chris Pratt as Indiana Jones, and Will Smith in The Matrix.

To better gauge the impact of deepfakes on a person’s memory, the researchers did not immediately tell participants that the films weren’t authentic, and they added real film remakes to the mix for comparison.

After watching four real films and two fake films in a random order, participants were asked if they’d seen or heard of the deepfaked versions before.

Anyone who claimed to have previously seen the entire film, a trailer for it, or even simply to have heard of it was categorised as having a false memory.

As the paper states, 75 per cent of participants who saw a deepfake video of Charlize Theron starring in Captain Marvel falsely remembered its existence.

Around 40 per cent of viewers falsely remembered the other three film remakes: The Shining, The Matrix, and Indiana Jones.

Interestingly, some went as far as to rank the movies that were never actually produced as being better than the originals – underscoring the alarming power of deepfake tech at distorting recollections of reality.

There is an important caveat, however, because as disconcerting as the results might be, using deepfakes to misrepresent the past did not appear to be any more effective than reading text descriptions of imaginary movies.

As lead author Gillian Murphy explains, the findings consequently do not indicate a ‘uniquely powerful threat’ posed by deepfakes compared to existing forms of misinformation.

‘The current evidence suggests they’re not uniquely powerful in terms of distorting memory, but they’re just as effective as the written word, which is a powerful vehicle for misinformation as we know,’ she told The Daily Beast.

‘So our study doesn’t claim that deepfakes can’t distort our memories, just that they’re not any more effective than existing methods.’

Additionally, a key component of the potential success of deepfakes is what’s known as ‘motivated reasoning’ – the tendency for people to unintentionally allow preconceived notions and biases to manipulate their memory.

In other words, while humans might not necessarily be swayed by deepfakes or fake news, they might be more inclined to seek out articles and opinions that affirm their worldview.

This tendency to take evidence at face value without much scrutiny is what drives the ‘Mandela Effect,’ in which large groups of people collectively share the same false memory.

This is what Murphy deems to be at play.

‘Our memories don’t work like video cameras – they have not evolved to perfectly preserve memories of exactly what happened,’ she concludes.

‘Instead, our memories are what we call ‘reconstructive’, where every time we recall something we build the memory in our minds. In this building process, we sometimes make errors by forgetting a piece of the event or adding in something that wasn’t there originally.’

‘While this can mean our memories are sometimes inaccurate, it often serves us very well as we can update our memories to reflect things we have learned.’
