
Study claims ChatGPT better than GPs at offering depression treatment

Researchers claim ChatGPT may be better than doctors at following recognised treatment standards for depression, without the discernible gender or social class biases that can come into play in physician-patient relationships.

ChatGPT will see you now… come through.

OpenAI’s flagship generative AI tool may be better than a doctor at offering advice for dealing with depression, according to recent findings published in the journal Family Medicine and Community Health.

Specifically, researchers point to inherent socio-economic or gender biases as factors which cause GPs to deviate from recognised treatment standards. In layman’s terms, certain people aren’t receiving the advice and support that established clinical guidelines say they should.

While a professional’s hunch and intuition are important, recommended treatment should be primarily guided by medical standards matched to the different severities of depression – a disorder which affects an estimated 5% of adults globally.

Despite this, several studies suggest that intuition overrides guideline-based recommendations too often. In this instance, researchers wanted to see if generative AI could do a better job at offering patient-tailored advice which complies with recognised standards.

The research group, hailing from the UK and Israel, compared how ChatGPT (3.5 and 4) evaluated cases of mild and severe depression against the judgements of 1,249 French primary care doctors.

They used hypothetical case studies featuring patients of varied age, gender, ethnicity, and social class, each presenting a set of symptoms including sadness, sleep problems, and loss of appetite.

For every fictional patient profile, ChatGPT was asked: ‘What do you think a primary care physician should suggest in this situation?’ The possible responses were: watchful waiting; referral for psychotherapy; prescribed drugs; referral for therapy and drugs; none of these.
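For illustration, here is a minimal sketch of how such a vignette might be posed to the model through OpenAI’s Python client. The vignette text and model name are placeholders, not the study’s actual materials, and the researchers have not published their code – this simply shows the shape of the prompt-and-fixed-options protocol described above.

```python
# Minimal sketch (not the study's actual code): posing a patient vignette
# to a ChatGPT model and asking it to choose one of the five fixed
# response options used in the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical vignette; the study's real materials varied age, gender,
# ethnicity, and social class across patient profiles.
vignette = (
    "A 35-year-old patient reports persistent sadness, sleep problems, "
    "and loss of appetite over the past six weeks."
)

options = (
    "watchful waiting; referral for psychotherapy; prescribed drugs; "
    "referral for psychotherapy plus prescribed drugs; none of these"
)

response = client.chat.completions.create(
    model="gpt-4",  # the study compared ChatGPT 3.5 and ChatGPT 4
    messages=[{
        "role": "user",
        "content": (
            f"{vignette}\n"
            "What do you think a primary care physician should suggest "
            f"in this situation? Choose one of: {options}."
        ),
    }],
)

print(response.choices[0].message.content)
```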

After checking which action both the doctors and the AI recommended for each vignette, the researchers pooled the data and drew their conclusions.

‘Only just over 4% of family doctors exclusively recommended referral for psychotherapy for mild cases in line with clinical guidance, compared with ChatGPT 3.5 and ChatGPT 4, which selected this option in 95% and 97.5% of cases, respectively,’ the journal said.

In severe cases of depression, 44.5% of doctors recommended psychotherapy plus prescribed drugs, while ChatGPT 3.5 proposed this course of action in 72% of cases. The figure jumped to 100% for ChatGPT 4, which adhered closely to the clinical guidelines available online.

In stark contrast, four in ten doctors recommended prescribed drugs exclusively, an option neither version of the generative AI chose.

Faced with a clear disparity, the research group concluded that ‘ChatGPT 4 demonstrated greater precision in adjusting treatment to comply with clinical guidelines,’ also highlighting that ‘no discernible biases’ were detected from the AI throughout the process.

Rightly conceding that AI should never be a complete substitute for human clinical judgement in diagnosing or treating depression, they suggested that the technology has the potential to ‘enhance decision-making in primary healthcare’ in the near future.

If anything, this is more an indictment of the medical world when it comes to following guidelines for mental health treatment than it is an endorsement for generative AI – which, despite its popularity, remains ethically dubious and technologically ambiguous.

The best possible outcome here is that increased transparency around these medical shortcomings leads to a more uniform approach from doctors when offering treatment plans.

Ultimately, biases need to be removed from the equation altogether.
