
New Instagram and Twitter updates target harmful online behaviours

The latest Twitter update will prompt users to reconsider the ‘vibe’ of a heated discussion, while an Instagram ‘nudge’ feature aims to steer teenagers away from harmful content.

Despite a newfound urgency in recent years to finally address online hate and content linked to poor mental health, the big social media platforms are still accused of being too reactive and not proactive enough.

When it comes to Gen Z specifically – the most introspective and anxious generation yet, and incidentally the first generation of digital natives – recent events suggest there’s definite room for improvement.

Only last month, reports emerged from the Wall Street Journal revealing concrete ties between the use of Instagram and mental health issues surrounding body image, particularly among teen girls.

For those of us who use these platforms daily, revelations that social media isn’t ‘broadly positive’ in its overall influence aren’t surprising. However, only now is renewed pressure starting to get through to those at the top. Instagram Kids couldn’t exist in the current landscape, that’s for damn sure.

As part of a change of tack and a strategy focused on making social networking genuinely safer and more inclusive, Instagram and Twitter have announced some initial changes.


Instagram curbing spiralling behaviours

Confronting the indictments levelled at it just a month earlier, Instagram is working on mechanics to steer young people away from spiralling behaviours on its app.

We’re all guilty of ending up down rabbit holes on Instagram, but the app is working on solutions to protect those of us with a tendency to get bogged down in types of content linked to self-deprecation.

Say, for instance, the app recognises a user has been dwelling on beauty influencer pages for a significant period of time. Theoretically, a new algorithm will prompt a pop-up recommending that the user switch up the type of content they’re viewing.
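Instagram hasn’t published how this works under the hood, but conceptually it boils down to tracking how long a session lingers on one type of content and firing a nudge once a threshold is crossed. The sketch below is purely illustrative – the category labels, thresholds and the `SessionNudger` helper are hypothetical assumptions, not Instagram’s actual code.

```python
from collections import defaultdict
from typing import Optional

# Hypothetical illustration of a dwell-time 'nudge' - not Instagram's real code.
# Assumptions: each viewed post carries a content-category label, and a nudge
# fires once a single category dominates the session past a time threshold.

NUDGE_THRESHOLD_SECONDS = 600   # made-up value: 10 minutes on one topic
DOMINANCE_RATIO = 0.7           # made-up value: topic fills 70% of the session


class SessionNudger:
    def __init__(self):
        self.seconds_by_topic = defaultdict(float)
        self.nudged_topics = set()

    def record_view(self, topic: str, seconds: float) -> None:
        """Accumulate time spent viewing posts tagged with a given topic."""
        self.seconds_by_topic[topic] += seconds

    def check_for_nudge(self) -> Optional[str]:
        """Return the topic to nudge the user away from, or None."""
        total = sum(self.seconds_by_topic.values())
        if total == 0:
            return None
        topic, spent = max(self.seconds_by_topic.items(), key=lambda kv: kv[1])
        if (spent >= NUDGE_THRESHOLD_SECONDS
                and spent / total >= DOMINANCE_RATIO
                and topic not in self.nudged_topics):
            self.nudged_topics.add(topic)   # only nudge once per topic per session
            return topic
        return None


# Example: a session dominated by beauty-influencer content
nudger = SessionNudger()
nudger.record_view("beauty_influencers", 650)
nudger.record_view("travel", 60)
topic = nudger.check_for_nudge()
if topic:
    print(f"Pop-up: you've spent a while on {topic} - try exploring something new?")
```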

Nick Clegg, Facebook’s Vice President of Global Affairs and Communications, believes such a mechanic will make a ‘considerable difference’ to how young people use the app and limit their time viewing content ‘not conducive to their well-being.’

Beyond this, the company is also working on a feature called ‘Take a Break,’ which aims to cater to Gen Z users who frequently pause their social media usage. For those whose online data shows signs of obsessive habits, Instagram will suggest activating the option for a brief period of time.


Twitter’s efforts to limit abuse

While Instagram is focusing on underlying behaviours with the potential to cause damage, Twitter is tackling an entirely different beast.

Favoured among politicians and public figures for its spontaneous nature, Twitter’s off-the-cuff discourse can often spill into hostile arguments and the spread of abuse. Just ask any Twitter-going football fan.

In order to stop the more ‘intense’ conversations from spilling into personal attacks, which will inevitably draw others to the toxicity, Twitter is testing several measures to be introduced in the near future.

Announcing planned changes on – you guessed it – Twitter, its support team revealed a ‘vibe check’ update that will soon give users a heads-up if the thread they’re about to engage in is likely to get heated.

If a tweet or reply is flagged, either by background algorithms or by user reports, the potentially inflammatory conversation will trigger a disclaimer message for anyone attempting to comment, complete with titbits of valuable advice.

This includes healthy conversation pointers like ‘remember the human, facts matter, and diverse perspectives have value.’
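Twitter hasn’t detailed the mechanics behind this flow either, but the announcement implies a simple gate: a thread marked as heated (by a classifier or by reports) shows a prompt before a reply goes through. Here’s a minimal sketch under those assumptions – the `HEATED_SCORE_THRESHOLD`, `REPORT_THRESHOLD` and `Thread` class are hypothetical, not Twitter’s API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the 'heads-up' flow described above - not Twitter's real code.
# Assumptions: a thread is flagged as heated either by a background model score
# or by accumulated user reports; flagged threads show a prompt before replying.

HEATED_SCORE_THRESHOLD = 0.8   # made-up classifier cut-off
REPORT_THRESHOLD = 5           # made-up number of user reports

POINTERS = [
    "Remember the human.",
    "Facts matter.",
    "Diverse perspectives have value.",
]


@dataclass
class Thread:
    toxicity_score: float = 0.0   # from a background classifier
    report_count: int = 0         # from user reports

    def is_heated(self) -> bool:
        return (self.toxicity_score >= HEATED_SCORE_THRESHOLD
                or self.report_count >= REPORT_THRESHOLD)


def reply_prompt(thread: Thread) -> Optional[str]:
    """Return the disclaimer to show before a reply is posted, or None."""
    if thread.is_heated():
        return "Conversations like this can be intense. " + " ".join(POINTERS)
    return None


# Example: a thread the classifier considers likely to turn nasty
thread = Thread(toxicity_score=0.85)
print(reply_prompt(thread) or "No warning needed - reply away.")
```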

As for those who charge on ahead regardless, abusing other users or spreading hate – and there will be many – Twitter users are encouraged to keep making use of last year’s options to limit replies and block automatically.

The start of meaningful change?

When it comes to the immediate aim of making social media platforms less toxic environments to spend hours and hours engrossed in, these two isolated examples are much-needed changes that must be followed by many more.

As the recent hate raids on Twitch and the racist abuse that followed the UEFA Euros final demonstrated, the absence of any user identification on social media will allow people to continue offending with complete anonymity.

So long as the likes of Facebook, Instagram, and Twitter fail to enforce such changes, they will continue to face pressure to make their networks as safe as possible, and will rightly be criticised for any shortcomings.

When it comes to reckoning with their own mental health impacts, social media apps now point users toward professional support resources and tend to lean heavily on them. To date, however, it’s clear these aren’t enough in isolation.

Instead, many are calling for regular transparency reports as an essential change. Through making data publicly available, running polls, and generally improving collaboration with users, these apps can refine their mechanics to better prevent destructive habits.

Either way, hopefully this is just the beginning of meaningful revisions in the months ahead.
