In the early days of its beta testing, a woman reported being groped in a metaverse platform called 'Horizon Worlds'. Immersive VR on the platform makes such experiences all too real, generating calls for stronger safety features.
Recently, the term 'metaverse' has become unavoidable, whether or not you've got a keen eye for the latest tech and gaming trends.
The metaverse is a rapidly growing virtual-reality world where users can go to school, work, play games, watch concerts, go shopping, and interact with others in the community without ever leaving home.
Tech giant Facebook recently changed its name to Meta with the intention of bringing the metaverse further into the mainstream. It has already developed a new metaverse platform that is now accessible to the general public.
Enter Meta's virtual-reality social media platform, 'Horizon Worlds', which has been compared to Minecraft thanks to its colourful interface and graphics.
The fun and light-heartedness of the digital world hasn't lasted long, though. In late November, one beta tester was groped by a stranger while navigating the platform in VR. She promptly reported her experience in the beta testing group on Facebook.
By now, software developers are well aware that negative social behaviours that exist in the actual world are prone to occur just as often, if not more so, in digital spaces. In fact, many anticipate these types of problems when creating interfaces.
Upon reviewing the woman's account of her experience on Facebook, developers at Meta said that she should've activated a tool called 'Safe Zone', one of the key safety features built into the Horizon Worlds platform.
Activating it places a virtual 'safety bubble' around the user, rendering others in the metaverse unable to touch, talk to, or interact with them until they choose to deactivate the Safe Zone feature.
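Meta hasn't published how Safe Zone works under the hood, but conceptually a personal-boundary toggle like this can be modelled as a filter that suppresses inbound social events while the bubble is active. The sketch below is a minimal, hypothetical Python illustration; the names (SafeZone, InteractionEvent, the event kinds) are assumptions for the example, not Horizon Worlds' actual API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class InteractionKind(Enum):
    # Hypothetical categories of avatar-to-avatar interaction
    TOUCH = auto()
    VOICE = auto()
    GESTURE = auto()


@dataclass
class InteractionEvent:
    sender_id: str
    kind: InteractionKind


class SafeZone:
    """Hypothetical personal-boundary toggle: while active, all inbound
    social events from other avatars are dropped before they reach the
    user's client."""

    def __init__(self) -> None:
        self.active = False

    def activate(self) -> None:
        self.active = True

    def deactivate(self) -> None:
        self.active = False

    def filter(self, events: list[InteractionEvent]) -> list[InteractionEvent]:
        # With the bubble on, nothing gets through: no touch, voice, or gesture.
        return [] if self.active else events


# Usage: interaction events arrive each frame; the filter decides
# which ones the user actually experiences.
zone = SafeZone()
incoming = [InteractionEvent("avatar_42", InteractionKind.TOUCH)]

zone.activate()
assert zone.filter(incoming) == []          # everything is blocked

zone.deactivate()
assert zone.filter(incoming) == incoming    # normal interaction resumes
```

The key design point this illustrates is that the suppression happens on the receiving user's side, which is why the tool only helps after the user has already noticed something is wrong.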
While this tool is helpful when a user feels triggered by another's actions, it does not stop harassment in digital spaces from happening in the first place.
Nor does it protect victims from the psychological or physiological responses that come afterward, especially on platforms where immersive VR makes the experience feel extremely realistic.
Acknowledging the severity of the problem
Responses to the post recounting the groping in Horizon Worlds were varied, but many Facebook users attempted to diminish the woman's experience, saying that what had happened was 'no big deal' simply because it didn't happen in the actual world.
However, researchers at the Digital Games Research Association have pointed out that instances of toxic behaviour (such as sexual harassment and bullying) in virtual spaces can be just as harmful as they are in person.
These negative experiences are heightened when immersive VR is used, meaning the social implications of virtual and verbal actions can be extremely triggering for those targeted.
"At the end of the day, the nature of virtual-reality spaces is such that it is designed to trick the user into thinking they are physically in a certain space, that their every bodily action is occurring in a 3D environment," said Katherine Cross, a researcher of online harassment at the University of Washington.
When people have a screen to hide behind, it's easier for them to act more recklessly than they would in person, due to a lack of measurable consequences. Researchers have termed this phenomenon the 'online disinhibition effect'.
And just as people of all age groups became accustomed to using Facebook during the early 2010s, it's likely large numbers of people will begin entering the metaverse on a daily basis in the not-so-distant future.
Before this happens, though, Meta (and other metaverse developers) should make sure they've covered as many safety bases as possible, such as disallowing features or character actions that have the potential to be abused or to make other users feel unsafe and uncomfortable.
Speaking on the subject, Horizon Worlds' vice president called the beta testing incident 'absolutely unfortunate' and described the beta tester's feedback as valuable. He added that the company will continue to improve the functionality and accessibility of the platform's 'block user' feature going forward.