
Can AI create the perfect lie detector?

AI and brain-scanning experts are looking to transform lie detectors from tools too unreliable to count as evidence into infallible devices capable of separating fact from fiction.

How can you tell if someone’s lying? It’s an age-old question we’d all love to know the answer to.

Studies suggest the average person hears up to 200 lies a day, largely made up of what me mumma calls ‘white lies’ or ‘porky pies’… these are mostly general niceties – ‘I’d love to hear about your dream, boss’ – intended to grease the wheels of social interactions without flushing decorum down the bog, and we’re all guilty of using them.

Slightly more disturbing are findings that suggest most people tell one or two big lies a day; ‘juicy whoppers’, either to promote themselves or to protect others and avoid hurting them… unless you’re an anarchist who enjoys stirring nonsense before sitting back and watching the world burn. Some lies are more fun than others.

The mystery, of course, is how we keep getting away with it. The father of psychoanalysis, Sigmund Freud, claimed in 1905: ‘If his lips are silent, he chatters with his fingertips. Betrayal oozes out of him at every pore.’ And while we now know that he was largely a paranoid coke addict, he did have a point.

For us unaccomplished fibbers, the human body is the biggest giveaway there is. Our hearts race, we stall and stutter, and involuntary expressions flicker across the small muscles in our faces. We either spin yarns with too much colour and detail, or we’re vague, evasive and give too little away. Yet despite all these signals, we’re nearly hopeless when it comes to spotting deception.

The average success rate for spotting deception – gleaned from 206 scientific studies – shows that people manage to separate fibs from fact just 54% of the time. With those numbers we might as well toss a coin. Tragic! Portsmouth Uni psychologist Aldert Vrij mostly attributes this to the fact that people’s tell-tale signs are so wide-ranging and inconsistent.

Humans have been trying to overcome this problem for millennia. Three thousand years ago in China, suspects were forced to chew and spit out rice; the grains were thought to stick in the dry mouths of the guilty. Ridiculous, of course, but the juddering stylus of the polygraph test truthfully isn’t much better.

If you’ve watched Mindhunter or Making a Murderer (no, we’re not sponsored by Netflix), you’ll be all too aware that polygraph testing is less than definitive and has frequently been used as a form of coercion throughout history. The reliability of results depends too heavily on the quality of the questions and the absolute objectivity of the interrogator… essentially the opposite of what Jeremy Kyle was in over 3,000 cases.

The threat of being outed by the machine was enough to sway innocent people into confessions all through the 1960s. It became a form of psychological torture that wrung false admissions from the vulnerable. Pair this with police officials or prosecutors who believed too strongly in the tech’s validity, and you had a recipe for disaster.  Thankfully, polygraph tests are rarely admissible as evidence today. A half-decent lawyer will always argue against it.

It’s not enough for us to just chalk the polygraph off as a flawed device, though; human nature is to constantly innovate. If something’s broken we have an insatiable desire to try and fix it, and we aren’t about to let this idea die.

The go-to resource for any revolutionary new piece of tech these days is AI, a branch of computer science built around the principle of adaptive learning. In layman’s terms, it refers to computer programs that keep adjusting themselves as they ‘learn’ more about their chosen task, much like the human brain.
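For the curious, here’s a minimal, purely illustrative sketch of what ‘adaptive learning’ means in practice: a tiny program that starts with no rules at all and nudges its own parameters every time it sees a labelled example. The ‘deception cues’ (voice tremor, response delay, gaze aversion) and the data are invented for the example; nothing here resembles any real lie-detection system.

```python
import math

# Toy 'adaptive learning': a classifier that updates its weights with each
# labelled example rather than being programmed with fixed rules up front.

def predict(weights, bias, features):
    """Return a score between 0 and 1 (closer to 1 = 'probably lying')."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def update(weights, bias, features, label, lr=0.1):
    """Nudge the weights towards the observed outcome (1 = lie, 0 = truth)."""
    error = predict(weights, bias, features) - label
    new_weights = [w - lr * error * x for w, x in zip(weights, features)]
    return new_weights, bias - lr * error

# Hypothetical cues per answer: [voice tremor, response delay, gaze aversion], each 0-1.
weights, bias = [0.0, 0.0, 0.0], 0.0
training_data = [
    ([0.9, 0.8, 0.7], 1),   # a known lie
    ([0.1, 0.2, 0.1], 0),   # a known truth
    ([0.8, 0.6, 0.9], 1),
    ([0.2, 0.1, 0.3], 0),
]

# The program 'learns' by repeatedly adjusting itself against the examples.
for _ in range(1000):
    for features, label in training_data:
        weights, bias = update(weights, bias, features, label)

print(round(predict(weights, bias, [0.85, 0.7, 0.8]), 2))  # high score = suspected lie
```

The point isn’t the maths, it’s the principle: nobody hand-codes what a lie looks like, the program infers it from whatever examples it’s fed – which is also exactly why it can only ever be as good as those examples.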

And you can bet programmers are vying to be the first to create the perfect lie detector using the principles of AI.

Today, the US Department of Homeland Security and authorities in Canada are testing an AI system known as AVATAR (Automated Virtual Agent) at border crossings. Using interactive videos, travellers answer standard questions about their luggage, potential weapons, produce and so on, before being digitally scanned for signs of dishonesty (a process lasting around 45 seconds), with ‘suspicious’ individuals sent for additional screening by human agents.
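To make that screening flow concrete, here’s a rough, hypothetical sketch of the triage logic described above: per-question scores from an automated interview get combined, and anyone over a threshold is routed to a human agent. The question list, scores and the 0.7 cut-off are all invented; this is not AVATAR’s actual code or criteria.

```python
# Hypothetical triage flow for an automated border interview.
# Scores, threshold and questions are invented for illustration only.

QUESTIONS = [
    "Are you carrying any weapons?",
    "Are you bringing any fruit or vegetables?",
    "Is this your own luggage?",
]

SUSPICION_THRESHOLD = 0.7  # made-up cut-off, not a real AVATAR parameter

def score_response(sensor_readings: list[float]) -> float:
    """Stand-in for the ~45-second scan: returns a 0-1 'dishonesty' score.
    Here we simply average whatever sensor readings we're handed."""
    return sum(sensor_readings) / len(sensor_readings)

def screen_traveller(responses: dict[str, list[float]]) -> str:
    """Combine per-question scores and decide whether to refer the traveller."""
    scores = [score_response(readings) for readings in responses.values()]
    overall = max(scores)  # one very suspicious answer is enough to flag
    if overall >= SUSPICION_THRESHOLD:
        return "refer to human agent for additional screening"
    return "clear to proceed"

# Fabricated readings per question (e.g. eye movement, voice, posture).
traveller = {q: [0.2, 0.3, 0.1] for q in QUESTIONS}
traveller["Are you carrying any weapons?"] = [0.8, 0.9, 0.75]
print(screen_traveller(traveller))  # -> refer to human agent for additional screening
```

Notice how much hangs on that arbitrary threshold: set it too low and queues fill with false alarms, too high and the system waves everyone through.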

A similar tool named EyeDetect is being trialled at several border crossings across the US and looks to employ the same principles as AVATAR, but by detecting minute changes, or ‘micro gestures’, in the subject’s eyes.

The primary issue with these technologies is that they focus entirely on superficial changes, which are subjective. Do you always smile because you’re happy, and frown because you’re wound up? Of course not. Will an eyelash in the eye after a dodgy question provoke a taser-happy patrol officer? We’re joshing, but you get the point.

Surely the next logical step is to isolate tell-tale changes in brain chemistry, where lies are actually formulated. Obviously, you can’t pack an MRI scanner into a suitcase and mosey on down to the interrogation room, but it’s clearly the most reliable method for deciphering lies at our disposal today.

Putting practicality aside, though, there are serious ethical concerns this technology could bring. A digital dystopia where machines are reading our most personal thoughts is scary enough, and once again crucial decisions could be made based on governments’ and corporations’ own faith in the technology. What if it just doesn’t work properly?

The creators of these tools continue to argue that weeding out deception is the key to creating a fairer, safer world. But they could inadvertently be putting the innocent at risk with their efforts. This application of AI just seems a little too optimistic, and a little too dangerous.
