
Researchers are using AI to predict criminal activity

AI is reportedly capable of predicting where crimes will take place up to a week ahead of time. The tech’s accuracy is around 90%, but there are concerns about its potential to perpetuate biases.

It may sound like something straight from the Bat Cave, but this tech exists for real and may even be widely utilised in the near future.

Scientists have reportedly found a way of predicting when and where criminal activity will take place using sophisticated AI. No, we’re not describing the plot of Minority Report.

Researchers at the University of Chicago trialled the technology in eight major US cities, including Chicago, Los Angeles, and Philadelphia, mapping out digital iterations of urban areas as grids of tiles roughly 1,000 feet across.

The machine learning system was fed historical crime data recorded between 2014 and 2016, and it managed to predict illegal activity correctly around 90% of the time. You can see the study for yourself in the journal Nature Human Behaviour.
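To make that setup concrete, here is a minimal sketch of the general idea in Python. It is not the researchers’ actual model – the study describes its own, more sophisticated approach – and the grid size, four-week lag window, and logistic-regression classifier are all illustrative assumptions, run here on synthetic data:

```python
# A toy illustration, NOT the study's model: tile a hypothetical city into
# a grid (the study reports tiles roughly 1,000 feet across), bin incident
# counts per tile per week, and train a simple classifier to predict
# whether a tile sees any crime the following week. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

GRID = 20    # hypothetical 20x20 grid of city tiles
WEEKS = 156  # roughly three years of weekly bins, echoing 2014-2016
LAGS = 4     # weeks of history used as features (an assumption)

# Synthetic weekly incident counts per tile, with persistent hotspots so
# the lagged counts carry real signal for the classifier to learn.
hotspot_rate = rng.gamma(shape=0.5, scale=2.0, size=(GRID * GRID, 1))
counts = rng.poisson(hotspot_rate, size=(GRID * GRID, WEEKS))

# Build (lagged counts -> any crime next week) pairs, ordered by week so
# the train/test split below is chronological.
X, y = [], []
for t in range(LAGS, WEEKS - 1):
    for tile in range(GRID * GRID):
        X.append(counts[tile, t - LAGS:t])
        y.append(int(counts[tile, t + 1] > 0))
X, y = np.array(X), np.array(y)

# Train on earlier weeks, evaluate on the most recent ones.
split = int(0.8 * len(X))
model = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
accuracy = accuracy_score(y[split:], model.predict(X[split:]))
print(f"held-out accuracy: {accuracy:.2f}")
```

Even this crude setup scores well above chance on synthetic hotspots, which illustrates why persistent spatial patterns make crime forecasting tractable – and also why any bias baked into the historical counts flows straight into the predictions.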

Describing the tech, lead researcher Ishanu Chattopadhyay stated: ‘We created a digital twin of urban environments. If you feed it data from what happened in the past, it will tell you what’s going to happen in future. It’s not magical, there are limitations, but we validated it and it works really well.’

Following the same principles, AI-based tech is already in wide use across Japan – though not to intercept criminals, but primarily to inform citizens of crime hotspots to avoid at particular times – and for the most part, it’s an effective system.

We have been warned previously, however, that use of AI in law enforcement has the potential to perpetuate harmful biases.

Back in 2016, for instance, the Chicago Police Department became embroiled in controversy for creating a database of people deemed most at risk of being involved in a shooting – either as a victim or assailant. They referred to this as the city’s ‘heat list.’

The inputs fed through the AI algorithm were initially kept secret, until it emerged that the list egregiously included 56% of the city’s Black men aged 20 to 29.

There’s a solid argument to be made that, because of the way crimes are recorded, the whole system was susceptible to bias before it even began.

Crimes in Chicago’s wealthier areas have been shown to lead to more arrests, for one, and we know that minority groups, particularly those in lower socioeconomic brackets, are disproportionately stopped and searched.

Chattopadhyay concedes that even the newest data will inevitably be slightly biased, but says efforts have been made to reduce this. He clarified that his AI doesn’t identify suspects, only potential sites of crime. ‘It’s not Minority Report,’ he reassured.

Speaking with some trepidation on the subject, Cambridge expert Lawrence Sherman warned: ‘It could be reflecting intentional discrimination by police in certain areas.’ Those worries are hard to allay.

As long as policing relies on crime reports that vary from state to state – reports entangled in the complex relationship between enforcement, misconduct, and wider society – using AI for direct crime prevention seems too murky to fully get on board with.

If, on the other hand, researchers are keen to utilise AI technology to unearth where these biases exist, that would be a far more agreeable prospect.
