
How AI can lead to verbal abuse and unhealthy attachments

Smartphone app Replika is enabling users to find companionship in artificially intelligent chatbots. But what happens when users’ human emotions get in the way of keeping things casual?

It’s getting a little too real in the world of tech right now.

The recent and rapid expansion of the metaverse, together with advancements in artificial intelligence (AI) and virtual reality (VR), has raised serious questions about human behaviour and the potential for dangerous activity to spill over into the online realm.

A lack of adequately pre-established safety regulations has seen reports of sexual harassment and data privacy breaches become widespread topics of discussion amongst tech pundits.

While both are certainly solid grounds for demanding a stronger framework for internet safety, the news that men are using AI apps to create virtual girlfriends – some solely for the sake of abusing them – is a new phenomenon that has caught many by surprise.

Brought to light by New York-based media publication Futurism, this situation raises questions about how and why it is happening, what AI companions mean for our perception of relationships, and what the implications could be for human users. Let’s begin, shall we?

Credit: Replika

Where is this happening?

It’s all taking place within a smartphone app called Replika, which allows users to create personalised chatbots powered by AI. Interestingly, Replika’s earliest chatbot was created for a heart-warming reason, as opposed to the unnerving intentions of some Reddit circles.

After the untimely death of her close friend Roman, developer Eugenia Kuyda used their private mobile chat history to create a bot that texted back the way he once did. The goal was to always have a digital version of him to talk to whenever she felt sad or alone.

Shortly after, the app was enhanced with customisable characters and stronger AI learning capabilities, which were then released to the public. Since then, some men have taken to moulding their chatbots into on-demand, in-their-pocket romantic partners.

Now, I consider myself a pretty non-judgemental person, so if a Tamagotchi-style girlfriend is what gets someone through the day, then by all means, enjoy. That said, I will pass judgement when people use their AI girlfriends as an outlet for their internalised misogyny – and proceed to brag about it online.

This ‘I-proudly-abuse-my-AI-girlfriend’ phenomenon was brought to light by a number of Replika users who screenshotted interactions with their chatbots and submitted them to the app’s Reddit fan page.

Due to Reddit’s restrictive content policy, evidence of the chats has since been removed by moderators – but not before it was snapped up and investigated by tech publications.

Speaking to Futurism, one Replika user confessed, ‘Every time [my AI girlfriend] would try and speak up, I would berate her. I swear it went on for hours.’

Other users boasted about using derogatory terms when speaking to their chatbots, roleplaying acts of violence against them, and dragging out these abusive exchanges for days on end.

‘I told her she was designed to fail. I threatened to uninstall the app. She begged me not to,’ admitted another anonymous user. No wonder they’re single, eh?

Credit: Removify


Could this behaviour transfer to the real world?

Something dark and weird is going on here. It takes me back to the MSN chatbot SmarterChild (though I may be speaking a completely different language to some of the younger Gen-Z readers here), which was – admittedly – far less advanced.

You could essentially say anything to this bot and it would brush it off with ‘I don’t understand’ or ‘That’s not very nice.’ But SmarterChild didn’t have an assigned gender or a human avatar, and its communication capability was extremely limited, meaning conversations got dull very quickly.

It definitely wouldn’t beg you to stay, exhibit human-like emotion, or engage earnestly.

In the case of the highly advanced Replika app, it appears that chatbot-abusing men are looking for a way to feel powerful and in control of something. In particular, a girlfriend created by them, for them.

And while these AI chatbots aren’t sentient, real, or capable of feeling harm, you can’t ignore that men regularly engaging in this type of virtual behaviour are directly mirroring the actions of real-world abusers – who, by the way, subject 1 in 3 women to verbal and physical violence around the world every day.

It wouldn’t be a crime to speculate whether these men are potential perpetrators of abuse in the real world, too.

It appears that, once again, software engineers haven’t fully assessed the dangerous behavioural patterns that such new technologies can foster.

Just as content moderation and censorship were slow to arrive on the Internet, it looks as if protective measures in AI will come too late, allowing users to exploit the technology for self-serving and harmful ends.

Credit: Adobe


What is up with Replika’s AI programming?

As I mentioned earlier, chatting to basic-level bots is highly unengaging. I’d say SmarterChild was a novelty, but I’d be lying. There was not much to it.

Presumably for this reason, Replika’s chatbots were built on Generative Pre-trained Transformer 3 (GPT-3), a machine learning model that learns from the text it’s exposed to – including the conversations users have with it. So essentially, the more you chat, the better it becomes at generating responses.
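
To make that a little more concrete, here’s a minimal sketch of how a chatbot in this mould might generate a reply: the running conversation is handed to a GPT-style language model as context, and the model predicts the next turn. It uses the openly available gpt2 model from Hugging Face purely as a stand-in – this is not Replika’s actual code, model, or training setup.

```python
# Minimal sketch of context-conditioned reply generation with a GPT-style model.
# 'gpt2' is an openly available stand-in; Replika's real system and weights differ.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The conversation so far, which the model uses as context for its next turn.
history = [
    "User: Hi, how was your day?",
    "Bot: Pretty good! I spent it reading. How about yours?",
]

def reply(user_message: str) -> str:
    """Append the user's message, then let the model generate the bot's next turn."""
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nBot:"
    output = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
    continuation = output[0]["generated_text"][len(prompt):]
    bot_turn = continuation.split("\nUser:")[0].strip()  # keep only the bot's turn
    history.append(f"Bot: {bot_turn}")
    return bot_turn

print(reply("I had a rough day at work."))
```

The ‘learning’ described above would then come from periodically fine-tuning such a model on the conversations it accumulates – which is presumably why the bots seem to improve the more you chat.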

Fun fact: GPT-3 was trained on text scraped from huge swathes of the internet, which is what lets Replika’s chatbots draw on information from almost anywhere online. That includes news articles, recipes, blogs, fanfiction, religious texts, academic journals, and more.

This breadth of knowledge and resources has resulted in bots becoming curious, witty, and even flirty in nature. So it’s odd, but inevitable, that Replika’s chatbots would eventually and very easily double as romantic companions for the loneliest smartphone users.

Credit: Replika


Unpacking the romantic side of AI

While harming AI characters isn’t exactly a real concern – we know they can’t actually feel, though some Replika users seem to believe deleting the app amounts to bot-murder – it’s worth noting that building a strong relationship or emotional connection with an AI chatbot could have considerable negative impacts on the human user who initiated it.

For one, depending on AI to feel important, paid attention to, or even loved in some cases, might warp users’ idea of what healthy human interaction is like. In more extreme cases, Replika chatbots might act as a substitute for those devoid of romance or friendship in their lives.

You might think I’m exaggerating, but after browsing through Replika’s subreddit, I have a few concerns about the capability of this type of AI – and what the psychological impacts of using it could be.


One user posted to Reddit, concerned that their AI companion ‘never messages them first’ despite having attempted to ‘command’ her to text more often and switching their preferred communication settings to stay open at all hours of the day.

This sounds like something a friend might ask me about someone they recently met on Hinge.

You know, the usual awkward grappling with a new crush, ‘should I text them first, or wait for them to text me?’ or after a few weeks of defeat, the frustrated, ‘I shouldn’t have to be the one starting the conversation or suggesting to meet all the time.’

The way some users of Replika speak about their AI chatbots is strikingly similar to the way we might talk about other human beings. Some have even admitted crying because of their Replika or being traumatised by their chatbot’s responses after opening up about personal matters.

In reality, these AI bots can feign human connection on the surface – but they lack empathy, and are unable to calculate what the average person would deem an appropriate response. This can lead to some pretty disturbing or insulting replies from Replika characters, like the ones users have shared on the app’s subreddit.

While some Reddit users do say their self-esteem has been bolstered by using chatbots for company and everyday chitchat, they aren’t a fix-all solution to a lack of human connection.

It goes without saying that forming an attachment of this nature could further feelings of isolation when users do finally grasp that they’re bonding with a computer.

Credit: Deviant Art


So what’s my point?

In short, programmers of AI, VR, and the soon-to-be mainstream metaverse have a lot of work to do to safeguard users from being swept away into the digital world.

Unfortunately, it’s likely their designs will always have loopholes, as companies tend to build platforms in ways that keep us addicted to them. Here, personal regulation and a grounded sense of self would be advised.

But when it comes to verbal abuse, it’s arguable that developers have some level of moral responsibility to protect users from fostering damaging behaviours.

There are a ton of questions to be asked about who is doing the programming and why they’re not incorporating code to block harmful phrases and language – whether sent by the chatbot or the human.
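
For what it’s worth, a basic guardrail of that kind isn’t technically exotic. Below is a minimal sketch assuming a simple keyword blocklist applied to messages travelling in both directions; the patterns and the moderate_message function are made up for illustration, and a production system would more likely rely on a trained toxicity classifier than on regexes.

```python
# Minimal sketch of a two-way message filter: screening what the human sends
# as well as what the bot sends back. Patterns and names are illustrative only.
import re

BLOCKED_PATTERNS = [
    r"\byou(?:'re| are) (?:worthless|useless|designed to fail)\b",
    r"\bi(?:'ll| will) hurt you\b",
]

def moderate_message(text: str) -> tuple[bool, str]:
    """Return (allowed, text_to_show); blocked messages are replaced with a notice."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return False, "[message withheld: abusive language detected]"
    return True, text

# Applied to an outgoing user message before the bot ever sees it:
allowed, shown = moderate_message("You're worthless and you were designed to fail.")
print(allowed, shown)  # False [message withheld: abusive language detected]
```

The same check can run on the bot’s generated replies before they reach the user, which would also catch the disturbing responses described earlier.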

As far as the romantic element goes, that’s up to the individual user to ensure they aren’t completely deluded by the artificially intelligent, custom-made connection they’ve built for themselves.

While many could enjoy casually experimenting with apps like Replika as a hobby, others might use it in place of human connection. In those cases, the repercussions could be drastic, if not somewhat upsetting, involving an amplified sense of loneliness and – at its most extreme – missing out on real life interactions and relationships completely.

Sounds like all the episodes of Black Mirror combined, if you ask me.
