
Sam Altman warns against young people using ChatGPT for therapy

ChatGPT chief Sam Altman just warned against using the generative AI as a therapist or life coach. The technology ‘isn’t ready yet,’ he said.

The media had us believe we would be jobless thanks to artificial intelligence by now, but the fearmongering of 2023 didn’t quite land.

On the contrary, people are embracing AI, particularly generative chatbots, for just about every use you can think of. The demographic showing the greatest appetite for the technology, unsurprisingly, is young people.

We’re not talking about just scheduling social media posts or rapidly generating coursework, either. We’ve written copiously about how heavily Gen Z relies on platforms like ChatGPT for everyday advice, a sense of connection, and even therapy – check out Flo’s recent article on the popularity of AI companions.

In March alone, 16.7 million TikTok posts referenced using ChatGPT as a tool for therapy, with the vast majority of creators being Gen Zers. Taking to AI like a duck to water, some ‘digital natives’ have grown their parasocial relationship with ChatGPT beyond recreation and into a sort of dependence. And it’s easy to see why.

Beyond the obvious perks of 24/7 availability, affordability, and being a judgement-free forum, several studies have suggested that AI may actually provide more concise and uninhibited advice (as per medical standards) than some mental health professionals.

Given that roughly one third of US teens have admitted to using generative AI for therapy, you’d expect the tycoons behind the industry’s big players to be thrilled about the boon to their user traffic.

What you wouldn’t expect is, say, ChatGPT chief Sam Altman to warn against engaging with generative AI for life advice or emotional support. Over the weekend, however, he did exactly that in a public conversation with Theo Von.

During the candid chat, Altman cautioned that the lack of regulation surrounding AI and ‘patient confidentiality’ could be a problem for users in any legal scenario.

‘People talk about the most personal shit in their lives to ChatGPT. People use it — young people, especially, use it — as a therapist and a life coach; they are having these relationship problems and [asking], “What should I do?”’, he explained.

‘Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.’

Altman conceded that these conversations being accessible by federal agencies (if requested) is ‘very screwed up,’ and urged users not to be too forthcoming with sensitive or private information on ChatGPT.

In all likelihood, these warnings will nonetheless fall on deaf ears. Adoption of generative AI has been all-encompassing, and young people don’t necessarily harbour an inherent fear of where their data ends up. It’s the internet’s cost of entry, after all.

For those who are proactive about protecting their digital footprint, the balance often still tips dramatically in favour of convenience. In other words, the benefits of ChatGPT and all it offers vastly outweigh the risks.

Hopefully, more comprehensive user protections aren’t too far away, but it’s refreshing to hear someone so personally connected to generative AI speak so honestly about its dangers. Fair play, Altman.
