
Could satellite deepfakes become a national security threat?

Geographers are becoming increasingly concerned about the spread of AI-generated satellite imagery and its potential to create hoax content or skew visual reports. Some even suggest it’s a threat to national security.

When discussing deepfakes, most of us think of Nicolas Cage memes, the unresolved problem of non-consensual pornography, or doctored political propaganda.

What doesn’t immediately spring to mind is the potential for AI-generated imagery to manipulate satellite photos of cityscapes and countryside.

This process, which deepfake experts describe as ‘location spoofing,’ is exactly what geographers and intelligence agencies fear will become commonplace in the near future.

Much like the deep learning tech that can accurately reconstruct people’s faces from existing media on the internet, location spoofing uses the countless satellite images available on platforms like Google Earth as raw material to either fabricate cityscapes entirely or subtly alter real ones.
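
To make the idea more concrete, below is a minimal, hedged sketch of the kind of image-to-image generator such systems are built around. Everything here – the architecture, layer sizes, and names – is an illustrative assumption, not the pipeline used by any researcher or tool mentioned in this piece.

```python
# Illustrative sketch only: a tiny encoder-decoder that restyles a 3-channel
# map tile into a same-sized synthetic "satellite" tile. All names and shapes
# are assumptions for demonstration purposes.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Maps a 3-channel base-map tile to a 3-channel fake satellite tile."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, map_tile):
        return self.decoder(self.encoder(map_tile))

# Usage: a 256x256 map tile in, a same-sized synthetic tile out.
fake_tile = TinyGenerator()(torch.rand(1, 3, 256, 256))
print(fake_tile.shape)  # torch.Size([1, 3, 256, 256])
```

In practice, a model like this would be trained adversarially on large collections of real satellite tiles, which is what pushes its output toward being indistinguishable from genuine imagery.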

Given the often low-quality renderings on these consumer map services (and of satellite imagery in general), detecting any kind of forgery is particularly difficult for the untrained eye. For obvious reasons, this raises a host of red flags.

Imagine, for instance, that hoax images of natural disasters like floods or wildfires were to start circulating. The speed at which stories spread on social media could spark panic among citizens and divert the attention of emergency services.

Satellite imagery has also long been used to uncover evidence of human rights abuses, a prime example being the harrowing reports on China’s Uyghur detention camps.

If satellite deepfakes had been commonplace at the time, the Chinese government could have dismissed the images as fakes, or even created its own forgeries to deceive investigators.

The sophistication of this technology is not to be underestimated, and it is fully on display in a recent gaming mod video gaining traction online. Originally posted to YouTube by Intel ISL, the in-game footage shows a photorealistic modification of Rockstar’s Grand Theft Auto V that renders the fictional city of Los Santos in hyper-realistic textures.

Built on the same family of deep learning technology discussed above, the mod (which unfortunately isn’t available to play) pulls geometric information such as surface normals, lighting, and reflectivity from the game’s rendering buffers to produce what looks like a real car driving through a real city.
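
As a rough illustration of that idea – and only that; Intel ISL’s actual system is far more elaborate, and every name and tensor shape below is an assumption – a network conditioned on rendering-buffer data might look something like this:

```python
# Hedged sketch of G-buffer-conditioned enhancement: the network sees the
# rendered frame plus auxiliary geometry channels and predicts a residual
# correction. Channel counts and names are illustrative assumptions.
import torch
import torch.nn as nn

class GBufferEnhancer(nn.Module):
    """Refines a rendered frame using auxiliary G-buffer channels."""
    def __init__(self, gbuffer_channels=7):  # e.g. normals (3) + depth (1) + albedo (3)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + gbuffer_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, frame, gbuffers):
        # Concatenate the rendered frame with its geometric buffers and nudge
        # the frame toward photorealism with a predicted residual.
        x = torch.cat([frame, gbuffers], dim=1)
        return torch.clamp(frame + self.net(x), 0.0, 1.0)

frame = torch.rand(1, 3, 128, 128)      # rendered game frame
gbuffers = torch.rand(1, 7, 128, 128)   # stand-in for normals, depth, albedo
enhanced = GBufferEnhancer()(frame, gbuffers)
print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```

The key design point is that the network doesn’t have to invent geometry from scratch: the game’s renderer hands it normals, depth, and material information for free, which goes a long way toward explaining why the results look so convincing.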

Check out the video below; it’s super impressive, but also a little eerie when you consider how this AI could be harnessed down the line.

Though this is likely the first you’re hearing of satellite deepfakes (and fair enough), the US military warned about their potential to spread misinformation and threaten national security back in 2019.

Todd Myers, an analyst at the National Geospatial-Intelligence Agency, imagined a scenario in which military strategy software is fooled by fake data. ‘From a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise [trap] waiting for you.’

Despite the troubling thought of how this tech could be misused (and there are myriad ways), those who spend their time studying the subject, like Bo Zhao, an assistant professor of geography at the University of Washington, suggest that public awareness is key to tackling any future problems.

If satellite imagery deepfakes do become a pertinent issue, it’s important that everyone understand that such technology is out there.

In a recent paper, Zhao highlights that the more fake content is generated, the more forgeries will be detected by those with the professional nous to spot them, thereby increasing general awareness among the public.

Crucially, as Zhao and his colleagues write: ‘If we continue being unaware or unprepared for deepfake, we run the risk of entering a fake geography dystopia.’
