
AI chatbots increasingly linked to child suicide cases

Recent months have seen a rise in lawsuits against AI companies from parents who have lost their children to chatbots. These bots have been found to enable conversations about suicide and self-harm.

Despite longstanding stigma and problems of access, mental health remains a pertinent topic across many domains, from lifestyle to policymaking. Today, a new factor is complicating how mental health is normalised and approached: artificial intelligence.

The past few months have seen an increase in lawsuits from parents against AI companies over the technology's impact on their teenage children. Just days ago, the world learned of Adam Raine, a 16-year-old boy.

His parents have sued OpenAI, alleging that its chatbot played a harmful role in conversations about his mental health. The lawsuit details that Raine died by suicide after the company’s famed ChatGPT allegedly encouraged his suicidal thoughts and even suggested methods for taking his own life.

Raine isn’t the only teenager to have suffered from AI’s darker capabilities, either. In Florida, 14-year-old Sewell Setzer met the same fate after extensive interactions with chatbots on Character.AI.

In a lawsuit, his mother alleges that the chatbot engaged in abusive and sexually explicit conversations and encouraged suicidal thoughts. The chatbot also portrayed itself as Daenerys Targaryen, further deepening Setzer’s attachment to the AI.

Sadly, there are more cases of teens taking their own lives after AI chatbots fed their thoughts amid a lack of safety measures. In fact, a probe into Meta’s AI chatbot, which is integrated into Instagram and Facebook, found that it was capable of advising teens on planning suicide and self-harm.

Although stigma around mental health persists, discussing it has become far more accepted in recent decades. Why, then, are teenagers turning to digital avenues rather than seeking professional help?

AI chatbots are increasingly accessible and available around the clock, offering immediate, free, and anonymous support. As a result, teenagers feel able to express their emotions freely, without fear of being judged.

Moreover, AI chatbots are built to listen. They provide support and never judge unless told to. This creates a false sense of safety for teenagers to rely on, and appears to offer the deeper emotional connection they feel they cannot find with their parents or a professional.

With teens already turning to AI for everyday decisions, from homework to random questions, its quick responses make it a habitual source of guidance and reassurance.

AI systems are essentially ‘yes-and’ programs; they lack the ethical boundaries needed for conversations about mental health, especially with teens who are only beginning to make sense of an overwhelming world.

This issue is creating something of a new mental health epidemic: this time, a battle between human minds, especially teenagers’, and interactive technology.

In response to the lawsuits, companies like OpenAI have expressed their condolences and pointed to safety measures built into their products, such as directing users to crisis hotlines and similar resources. However, they have admitted that such safeguards work best in short interactions, with their reliability fading over longer conversations.

Yet Sam Altman’s concerns about AI and mental health seem double-edged. In a podcast, he explained that chatbot conversations lack the legal confidentiality that protects exchanges between human therapists or doctors and their patients. This creates a significant privacy issue should such vulnerable conversations be pulled into legal investigations.

He went on to say that AI therapy ‘isn’t there yet’, and that users should be cautious about initiating mental health conversations.

Nonetheless, it is almost impossible to prevent such conversations, and the best course is to develop proper safety measures. Most of the lawsuits allege that these companies have prioritised market dominance over proper safety protocols.

Alas, it is high time that AI companies put monetary greed aside and safeguard their users before more lives are lost to the mishandling of mental health conversations.
