Staying one step ahead of terrorist outfits is proving increasingly difficult with the rapid proliferation of open source platforms and artificial intelligence.
Monday marked the 20th anniversary of the 7/7 terror attacks in London, and with the grim events of that week once again at the forefront of the public’s minds, counter-terrorism groups are working to stay ahead of nefarious groups that may try to cause harm in the future.
According to counter-terrorism authorities from the UK’s transatlantic cousins, however, that MO is becoming harder to achieve. In the US, multiple security agencies are reportedly concerned about the rapid spread of AI and how it’s already helping hostile outfits plan and organise.
Terrorists utilising the latest technologies isn’t a new problem, either. In recent years we’ve seen them employ everything from crypto for covert transactions to blueprints for 3D-printed weapons, and authorities have to perpetually anticipate and adapt. When it comes to AI, this is easier said than done.
‘Our research predicted exactly what we’re observing: terrorists deploying AI to accelerate existing activities rather than revolutionise their operational capabilities,’ said Adam Hadley, founder and executive director of the UN-backed watchdog Tech Against Terrorism.
Intelligence shows that Islamic State terrorists have been using ChatGPT to amplify their radicalisation propaganda in a myriad of ways, including transforming text bulletins into audio files for would-be recruits and even creating AI-generated imagery.
Europol and SITE Intelligence Group have flagged several developments from encrypted jihadist chats since May 2024, most notably guides for voice-cloning – presumably to spread inflammatory misinformation – phishing methods for hacks, and the sabotage of autonomous drones and vehicles. It’s all relatively nascent, but the appetite to exploit the technology is massive and is creating genuine concern.