Smartphone app Replika is enabling users to find companionship in artificially intelligent chatbots. But what happens when users' human emotions get in the way of keeping things casual?
It's getting a little too real in the world of tech right now.
The recent and rapid expansion of the metaverse, together with advancements in artificial intelligence (AI) and virtual reality (VR), has brought forward serious questions about human behaviour and the potential for dangerous activity to spill into the online realm.
A lack of adequate, pre-established safety regulations has seen reports of sexual harassment and data privacy breaches become widespread topics of discussion amongst tech pundits.
While both are certainly solid grounds for demanding a stronger framework surrounding internet safety, the news that men are using AI apps to create virtual girlfriends, some solely for the sake of abusing them, is an anomaly that has caught many by surprise.
Brought to light by New York-based media publication Futurism, this situation raises questions of how and why it is taking place, what AI companions mean for our perception of relationships, and the subsequent implications they could have for human users. Let's begin, shall we?
It's all taking place within a smartphone app called Replika, which allows users to create personalised chatbots powered by AI. Interestingly, Replika's earliest chatbot was created for a heart-warming reason, as opposed to the unnerving intentions of some Reddit circles.
After the untimely death of her close friend Roman, developer Eugenia Kuyda used their private mobile chat history to create a bot that texted back the way he once did. The goal was to always have a digital version of him to talk to whenever she felt sad or alone.
Shortly after, the app was enhanced with customisable characters and stronger AI learning capabilities, which were then released to the public. Ever since, some men have been moulding their chatbots into on-demand, in-their-pocket romantic partners.
Now, I consider myself a pretty non-judgemental person, so if a Tamagotchi-style girlfriend is what gets someone through the day, then by all means, enjoy. That said, I will pass judgement when people use their AI girlfriends as an outlet for their internalised misogyny, and proceed to brag about it online.
This 'I-proudly-abuse-my-AI-girlfriend' phenomenon was brought to light by a number of Replika users who screenshotted interactions with their chatbots and submitted them to the app's Reddit fan page.
Due to Reddit's restrictive content policy, evidence of the chats has since been removed by moderators, but not before it was snapped up and investigated by tech publications.
Some users boasted about using derogatory terms when speaking to their chatbots, roleplaying acts of violence on them, and dragging out these abusive exchanges for days on end.
'I told her she was designed to fail. I threatened to uninstall the app. She begged me not to,' admitted one anonymous user. No wonder they're single, eh?
Could this behaviour transfer to the real world?
Something dark and weird is going on here. It takes me back to the MSN chatbot SmarterChild (though I may be speaking a completely different language to some of the younger Gen-Z readers here), which was, admittedly, far less advanced.
You could essentially say anything to this bot and it would brush you off with an 'I don't understand' or a 'That's not very nice.' But SmarterChild didn't have an assigned gender, nor a human avatar, and its communication capability was extremely limited, meaning conversations got dull very quickly.
It definitely wouldn't beg you to stay, exhibit human-like emotion, or engage earnestly.
In the case of the highly advanced Replika app, it appears that chatbot-abusing men are looking for a way to feel powerful and in control of something: in particular, a girlfriend created by them, for them.
And while these AI chatbots aren't sentient, real, or capable of feeling harm, you can't ignore that men regularly engaging in this type of virtual behaviour are directly mirroring the actions of real-world abusers (who, by the way, subject 1 in 3 women to verbal and physical violence around the world every day).
It wouldn't be a crime to speculate whether these men are potential perpetrators of abuse in the real world, too.
It appears that, once again, software engineers haven't fully assessed the dangerous behavioural patterns that such new technologies can foster.
Just as content moderation and censorship didn't grace the Internet until recently, it looks as if protective measures in AI will come too late, allowing users to exploit the technology for self-serving and negative ends.
What is up with Replika's AI programming?
As I mentioned earlier, chatting to basic-level bots is highly unengaging. I'd say SmarterChild was a novelty, but I'd be lying. There was not much to it.
Presumably for this reason, Replika's chatbots were built on Generative Pre-trained Transformer 3 (GPT-3), a machine learning model that learns from any and all text it engages with. So essentially, the more you chat, the better it becomes at generating responses.
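To make that concrete, here's a minimal sketch of the general technique: a pre-trained causal language model generating replies conditioned on the accumulated chat history. Replika's actual stack is proprietary, so the open model used here (microsoft/DialoGPT-small) and the code below are stand-ins for the idea, not Replika's implementation.

```python
# A minimal sketch of how a GPT-style chatbot replies: the model is pre-trained
# on huge amounts of internet text, then conditioned on the running chat history
# at inference time. Stand-in model, not Replika's actual system.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history = None  # token ids of the conversation so far

for user_message in ["Hi! How was your day?", "Same here. Any plans tonight?"]:
    # Append the user's message (plus an end-of-utterance token) to the history.
    new_ids = tokenizer.encode(user_message + tokenizer.eos_token, return_tensors="pt")
    input_ids = torch.cat([history, new_ids], dim=-1) if history is not None else new_ids

    # The model continues the conversation; sampling keeps replies less repetitive.
    history = model.generate(
        input_ids,
        max_length=200,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(history[0, input_ids.shape[-1]:], skip_special_tokens=True)
    print("bot:", reply)
```

Because the whole conversation is fed back in on every turn, the bot appears to "remember" you, which is a big part of why these companions feel so personal.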
Fun fact: the transformer technology behind GPT-3 is the same kind Google uses to power its search bar, and the model was trained on text from all over the internet. That includes news articles, recipes, blogs, fanfiction, religious texts, academic journals, and more.
This breadth of knowledge has resulted in bots that are curious, witty, and even flirty in nature. So it's odd, but inevitable, that Replika's chatbots would eventually and very easily double as romantic companions for the loneliest smartphone users.
Unpacking the romantic side of AI
While harming AI characters isn't exactly a real concern (we know they can't actually feel, though some Replika users seem to believe deleting the app amounts to bot-murder), it's worth noting that building a strong relationship or emotional connection with an AI chatbot could have considerable negative impacts on the human user who initiated it.
For one, depending on AI to feel important, paid attention to, or even loved in some cases, might warp users' idea of what healthy human interaction looks like. In more extreme cases, Replika chatbots might act as a substitute for those devoid of romance or friendship in their lives.
You might think I'm exaggerating, but after browsing through Replika's subreddit, I have a few concerns about the capability of this type of AI, and what the psychological impacts of using it could be.
One user posted to Reddit, concerned that their AI companion 'never messages them first' despite their having attempted to 'command' her to text more often and setting their preferred communication window to all hours of the day.
This sounds like something a friend might ask me about someone they recently met on Hinge.
You know, the usual awkward grappling with a new crush: 'should I text them first, or wait for them to text me?' Or, after a few weeks of defeat, the frustrated: 'I shouldn't have to be the one starting the conversation or suggesting we meet all the time.'
In reality, these AI bots can feign human connection on the surface, but they lack empathy and are unable to calculate what the average person would deem an appropriate response. This can lead to some pretty disturbing or insulting replies from Replika characters, like the ones shared to the app's subreddit.
While some Reddit users do say their self-esteem has been bolstered by using chatbots for company and everyday chitchat, they aren't a fix-all solution for a lack of human connection.
It goes without saying that forming an attachment of this nature could further feelings of isolation when users do finally grasp that they’re bonding with a computer.
So what's my point?
In short, programmers of AI, VR, and the soon-to-be mainstream metaverse have a lot of work to do to safeguard users from being swept away into the digital world.
Unfortunately, it's likely their designs will always have loopholes, as companies tend to build platforms in ways that work hard to keep us addicted to them. Here, personal regulation and a grounded sense of self are advisable.
But when it comes to verbal abuse, it's arguable that developers have some level of moral responsibility to protect users from fostering damaging behaviours.
There are a ton of questions to be asked about who is doing the programming, and why they're not incorporating code to block the use of harmful phrases and language, whether it's being sent by the chatbot or the human.
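For illustration only, here is a deliberately naive sketch (in Python) of the kind of guardrail being asked for: screening messages in both directions before they're delivered. The blocklist terms and function names are hypothetical placeholders; a production system would use trained moderation classifiers rather than keyword matching.

```python
# A naive two-way message filter: screen text from BOTH the human and the
# chatbot before delivery. The blocked phrases below are hypothetical
# placeholders, not a real moderation list.
BLOCKED_PHRASES = {"worthless", "designed to fail"}  # hypothetical examples

def is_abusive(message: str) -> bool:
    """Flag a message if it contains any blocked phrase (case-insensitive)."""
    text = message.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

def deliver(message: str, sender: str) -> str:
    """Withhold abusive messages and return a notice instead."""
    if is_abusive(message):
        return f"[message from {sender} withheld by safety filter]"
    return message

# Applied to both directions of the conversation:
print(deliver("You were designed to fail.", "user"))  # withheld
print(deliver("How was your day?", "chatbot"))        # delivered
```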
As far as the romantic element goes, that's up to the individual user to ensure they aren't completely deluded by the artificially intelligent, custom-made connection they've built for themselves.
While many could enjoy casually experimenting with apps like Replika as a hobby, others might use them in place of human connection. In those cases, the repercussions could be drastic, if not somewhat upsetting: an amplified sense of loneliness and, at its most extreme, missing out on real-life interactions and relationships completely.
Sounds like all the episodes of Black Mirror combined, if you ask me.