
Royals photo commotion highlights challenge of discerning fake content

As artificial intelligence makes distinguishing between real and fake imagery increasingly tricky, people are undoubtedly becoming less trusting of the media they’re presented with. Can we nip this growing scepticism in the bud?

Society’s threshold for what qualifies as ‘uncanny valley’ in 2024 is becoming more debatable by the month. Public trust, meanwhile, is clearly waning.

Thanks to artificial intelligence and its ability to create uncannily realistic photos and videos, people are distinctly more suspicious of the media they’re presented with and whether or not it’s entirely authentic.

If the suggestion that algorithms would create mass confusion and hysteria was previously dismissed as a ‘storm in a teacup’, the ongoing case surrounding Catherine Middleton points to the contrary.

If you didn’t know already, the Princess of Wales posted a family photo to Instagram on Mother’s Day. Though it appeared innocuous to the naked eye, folk examined the photo forensically and twigged several examples of supposed digital manipulation.
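
For the curious, the most basic of these forensic checks is easy to reproduce. Below is a minimal Python sketch, using the Pillow library, that reads an image’s EXIF ‘Software’ tag, which often names the editing program last used on a file. The filename is hypothetical, and an absent tag proves nothing either way, since platforms like Instagram routinely strip metadata on upload.

```python
# A crude first pass at the kind of check armchair sleuths ran on the photo:
# inspect the EXIF metadata for traces of editing software.
from PIL import Image
from PIL.ExifTags import TAGS

def editing_software_hint(path: str) -> str | None:
    """Return the EXIF 'Software' tag if present, e.g. an editor's name."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "Software":
            return str(value)
    return None

# Hypothetical filename; a missing tag may simply mean metadata was stripped.
hint = editing_software_hint("family_photo.jpg")
print(hint or "No 'Software' tag found")
```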

Given frenzied social media speculation had already been raging about Kate’s lack of public appearances in the months leading up to the post – with #WhereisKate trending for days – people quickly jumped to some wild conclusions.

Following the announcement in January 2024 that Kate had successfully undergone abdominal surgery and was in a period of recovery, unfounded conspiracies spread about the royal’s physical and mental wellbeing. Kate’s uncle Gary Goldsmith was even questioned on the subject last week by Ekin-Su in the Celebrity Big Brother house.

I personally overheard a memorable discussion where someone claimed Kate had vanished to receive BBL (Brazilian butt lift) surgery. Answers on a postcard for why that would necessitate editing a portion of her daughter’s sleeve in a family photo.

As part of blanket policies to prevent ‘manipulated media’ from circulating, photo agencies including Reuters, the Associated Press, and Getty quickly withdrew the photo from their respective stock libraries. As you’d imagine, this didn’t exactly quell the conspiracies.

In the days since the post, however, Kate has openly clarified that, as an amateur photographer, she attempted to touch up the family portrait using digital software. Despite the labours of several deepfake experts, no evidence of AI generation tools has been found either.

‘Like many amateur photographers, I do occasionally experiment with editing,’ the princess wrote. ‘I wanted to express my apologies for any confusion the family photograph we shared yesterday caused.’ She was publicly spotted that same Monday (March 11).

While a portion of Brits dutifully defend Kate and her right to do what millions of others do on a daily basis, especially on Instagram, the photo has inadvertently demonstrated just how much manipulated media can diminish public trust.

This is an important point, given 2024 has been touted as the ‘super election year’, with half of the world going to the polls. As AI grows ever more sophisticated across photo, video, and audio, the potential for doctored propaganda to proliferate is huge.

Media institutions are reportedly on red alert to minimise the spread through official networks, but the recent slip-up from photo agencies suggests that current vetting safeguards perhaps aren’t adequate.

Nevertheless, it’s hard to lay blame at anyone’s door. The ceaseless volume and speed at which AI-generated content is appearing online is all but impossible to police.

High-profile offences, such as the 2022 deepfake which showed Ukrainian President Volodymyr Zelensky surrendering on national TV, or the more recent spread of explicit pornographic images using Taylor Swift’s likeness, have accelerated efforts for bipartisan reform and safeguards, but we’re not winning the battle yet.

At a London tech conference this week, UK tech secretary Michelle Donelan conceded that when it comes to thwarting AI deepfakes, there is ‘no one technological solution that is the answer to everything’.

A notable industry effort to mitigate nefarious uses of AI, dubbed the Coalition for Content Provenance and Authenticity, includes Adobe, the BBC, Microsoft, and Google.

This consortium is developing standards for identifying disinformation, while all Meta platforms have started labelling AI-generated photos and videos. Google is also trialling a tool capable of flagging images digitally created or amended with its own AI systems.
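
Under the coalition’s C2PA standard, that provenance data is embedded in the file itself as a cryptographically signed manifest. Proper verification requires the coalition’s SDK or its c2patool utility, but the rough Python sketch below illustrates the general idea: checking whether a JPEG carries any embedded manifest at all. The filename is hypothetical, and the byte search is a crude heuristic rather than real validation.

```python
# Rough heuristic only: scan a JPEG's bytes for the 'c2pa' JUMBF box label
# that C2PA manifests are stored under. Real verification (signature checks,
# edit history) needs the C2PA SDK or c2patool; this only detects presence.
def carries_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data

print(carries_c2pa_manifest("agency_photo.jpg"))  # hypothetical filename
```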

While these initiatives will no doubt foil future attempts to deceive or cause harm, it could be argued that the worst damage is already done. By default, people appear notably more reluctant to accept media at face value, for fear of being duped by increasingly opaque technology.

We have plenty of time for the harmless memes currently circulating on Twitter – and oh my, are there some good ones – but the online reaction when the photo first dropped was genuinely concerning. Mass hysteria is the only way to describe it.

Looking to the near future, as the exploits of AI become harder to distinguish from genuine reality, people are destined to become more on edge and guarded about what they choose to believe. Doesn’t sound like a healthy way to live, right?
