
How effective is medical AI? It’s complicated

Some healthcare professionals believe artificial intelligence will buy them more time with patients. Others fear it puts jobs, and lives, at risk. 

In a recent episode of The Pitt, a viral TV show about the inner workings of a Pittsburgh hospital, overworked doctors battled a new antagonist: AI.

An attending physician tells her staff that the technology can cut the time they spend charting results by 80%, buying them more time to provide treatment. But later, doctors discover that the system has fabricated details about a patient and confused ‘urology’ with ‘neurology’.

The problems raised in the show mirror the real-life experience of medical staff navigating the new age of artificial intelligence. Hospitals worldwide are grappling with questions about the effectiveness of this technology: whether it will benefit the healthcare industry and its patients, or put them in harm’s way.

In a 2025 survey by the American Medical Association, two out of three physicians reported using AI to assist with their daily work. Places with high rates of poverty and homelessness lean on this technology to provide access to those who might otherwise be unable to seek medical advice. In other words, AI is pushing the doctor out.

Individuals can ask chatbots to review their symptoms and spit out suggested treatments. And while this frees up staff to see other patients and removes barriers to treatment in a nation where healthcare is infamously inaccessible, it poses serious risks.

For starters, AI is never 100% accurate – at least not yet. Any information it generates should be reviewed by a human, which means it only adds another layer of work to already strained industries like healthcare. Given that there is even the slightest risk of misinformation leading to fatal results, AI’s medical use remains a taboo subject. But regardless, people from low-income backgrounds should not be the guinea pigs.

On the face of it, medical AI sounds like the magic pill we’ve all been waiting for. Better diagnoses and personalised support for more patients. Faster drug discovery and research into diseases like cancer. But it’s not this simple in practice. And contending with the use of these systems is one of the biggest challenges currently facing medical professionals – because there’s no concrete way to test them without endangering human lives.

That hasn’t stopped the rollout of AI for ostensibly benign areas of medical work, like data acquisition and documentation. In Guernsey, a smart AI assistant named Heidi was rolled out last year to help doctors take notes during patient visits.

‘Right now, doctors spend around 40% of their time writing and checking notes, letters and reports. While this work is important, it takes them away from seeing patients,’ Dr. Farid Fouladinejad told the BBC.

‘[AI] could help them see more patients and reduce waiting times.’ While professionals like Dr. Fouladinejad aren’t naive to the risks of platforms like Heidi, their integration into the medical field raises questions around the future of wider industries and our understanding of what constitutes ‘important’ work.

When it comes to healthcare, even the most routine tasks like notetaking are a vital part of the process, and while it may be tempting to use AI to lighten the load – particularly in the NHS, where doctors face punishing schedules and tight budgets – it’s hugely consequential.

That’s not to say human error is avoidable either – no doubt the same concerns were raised when computer technology was first introduced to medical treatment. But what feels most concerning about AI is our willingness to trust it – perhaps because of how easy it makes our own lives.

‘When you’ve thought about any change within medicine, for example, you take the dictaphone, the typewriter, we need to move forwards and this is part of that journey,’ said Dr Michelle Le Cheminant.

But the crucial difference is that these previous forms of technology were being operated by humans. We might have created AI, but it has a mind all of its own.
