In the early days of beta testing, a woman reported being groped in a metaverse platform called ‘Horizon Worlds’. Immersive VR makes such experiences all too real, prompting calls for stronger safety features.
Recently, the term ‘metaverse’ has become unavoidable, whether you’ve got a keen eye for keeping up with the latest tech and gaming trends or not.
The metaverse is a rapidly growing, virtual reality world where users can go to school, work, play games, watch concerts, go shopping, and interact with others in the community without leaving their actual home.
Tech giant Facebook recently changed its name to Meta with the intention of bringing the metaverse further into the mainstream. It has already developed a new platform in the metaverse that is now accessible to the general public.
Enter Meta’s virtual-reality social media platform, ‘Horizon Worlds’ – which has been compared to Minecraft due to its colourful interface and graphics.
The fun, light-hearted atmosphere of the digital world hasn’t lasted long, though. In late November, one beta tester was groped by a stranger while navigating the platform in VR. She promptly reported her experience in the beta testing group on Facebook.
By now, software developers are well aware that negative social behaviours that exist in the actual world are prone to occur as often – if not more so – in digital spaces. In fact, many anticipate these types of problems when creating interfaces.
Upon reviewing the woman’s account of her experience on Facebook, developers at Meta said that she should’ve activated a tool called ‘Safe Zone’ – one of the key safety features built into the Horizon Worlds platform.
When activated, a virtual ‘safety bubble’ forms around the user, rendering others in the metaverse unable to touch, talk to, or interact with them until they choose to deactivate the Safe Zone feature.
While this tool is helpful when a user feels triggered by another’s actions, it does not stop harassment in digital spaces from happening in the first place.
Nor does it protect the victim from the psychological or physiological responses that follow – especially on platforms where immersive VR makes the experience feel extremely realistic.
Acknowledging the severity of the problem
Responses to the woman’s post recounting being groped in Horizon Worlds were varied, but many Facebook users attempted to diminish her experience, saying that what had happened was ‘no big deal’ simply because it didn’t happen in the actual world.
However, researchers at the Digital Games Research Association have pointed out that instances of toxic behaviour (such as sexual harassment and bullying) in virtual spaces can be just as harmful as they are in person.
These negative experiences are heightened when immersive VR is used, meaning the social implications of virtual and verbal actions can be extremely triggering for those targeted.
‘At the end of the day, the nature of virtual-reality spaces is such that it is designed to trick the user into thinking they are physically in a certain space, that their every bodily action is occurring in a 3D environment,’ said Katherine Cross, a researcher of online harassment at the University of Washington.
When people have a screen to hide behind, it’s easier for them to act more recklessly than they would in person, due to the lack of tangible consequences. Researchers have coined this phenomenon the ‘online disinhibition effect’.
And just as people of all age groups became accustomed to using Facebook during the early 2010s, it’s likely large numbers of people will begin entering the metaverse on a daily basis in the not-so-distant future.
Before this happens, though, Meta (and other metaverse developers) should make sure they’ve covered as many safety bases as possible – such as disallowing features or character actions that could be abused or used to make other users feel unsafe and uncomfortable.
Speaking on the subject, Horizon Worlds’ vice president called the beta testing incident ‘absolutely unfortunate’ and described the beta tester’s feedback as valuable. He added that the company will continue to improve the functionality and accessibility of the platform’s ‘block user’ feature going forward.
I’m Jessica (She/Her), the Deputy Editor & Content Partnership Manager at Thred, based in London, UK. Originally from the island of Bermuda, I specialise in writing about ocean health and marine conservation, but you can also find me delving into pop culture, health and wellness, plus sustainability in the beauty and fashion industries.