Instagram’s stance against self-harm content

Instagram has pledged to extend its ban on self-harm-related content to cover cartoons, drawings, and memes in its bid to become a safer platform for young people.

This week Instagram confirmed its plan to flag and remove all images, drawings, cartoons, and memes depicting any form of self-harm or suicide as it ramps up efforts to make the platform as safe as possible for young users.

Insta is widely renowned as the ‘go-to’ social hub for youngsters nowadays, with over 70% of 13- to 17-year-olds worldwide owning personal profiles, according to Sprout Social. As a result, the Facebook-owned network is under mounting pressure from the media, charity organisations, and the public to ensure it’s adequately protecting users from exposure to harmful or coercive content.

This latest move is one of many precautions taken since the tragic death of British teenager Molly Russell, who took her own life in 2017 after viewing graphic content on the platform. Since February, Insta chief Adam Mosseri has focused on restricting the circulation of content with suicidal themes, including stills and videos. But campaigners maintain that further work needs to be done.

Florida-based internet safety campaigner Dr Free Hess revealed that, despite Insta’s efforts, harmful content is still spreading on the platform, and showed multiple examples of unrestricted graphic photographs, videos, and memes advocating suicide to attendees at a lecture on online safety in New Jersey.

In response, Insta put out a statement claiming that it has doubled the amount of material removed since the first quarter of 2019, removing 834,000 pieces of content – 77% of which had not been actively reported by users. Mosseri conceded that ‘there is very clearly more work to do’ and declared that the ‘work never ends’.

Recent concerns suggest that Insta’s algorithms, which aim to show individuals more of what they search for initially, are responsible for exposing those who stumble upon harmful content to more of it on the Explore page. For those who aren’t particularly tech-savvy: I follow a bunch of Manchester United pages (up the Red Devils) and have searched their popular hashtags in the past, and now my whole Explore page is loaded with content dedicated to them. It’s the same principle with posts and hashtags related to self-harm.
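To make the feedback loop concrete, here’s a minimal sketch of the general idea behind engagement-based ranking – purely illustrative, not Instagram’s actual system; the function and data shapes are invented for this example. Posts sharing hashtags a user has already engaged with get scored higher, so one topic quickly dominates the feed:

```python
from collections import Counter

def rank_explore_posts(engagement_history, candidate_posts):
    """Score candidates by how often the user has engaged with their
    hashtags before, then surface the highest-scoring posts first."""
    tag_weights = Counter(tag for tags in engagement_history for tag in tags)

    def score(post):
        # A post earns points for every hashtag the user has seen before
        return sum(tag_weights[tag] for tag in post["hashtags"])

    return sorted(candidate_posts, key=score, reverse=True)

# A user who repeatedly engages with one topic...
history = [["mufc", "football"], ["mufc"], ["mufc", "reddevils"]]
posts = [
    {"id": 1, "hashtags": ["cooking"]},
    {"id": 2, "hashtags": ["mufc", "football"]},
    {"id": 3, "hashtags": ["reddevils"]},
]
ranked = rank_explore_posts(history, posts)
# ...sees posts about that topic ranked above everything else.
```

The point of the sketch is that the ranking has no notion of whether a topic is benign or harmful – engaging with self-harm hashtags amplifies them in exactly the same way as engaging with football ones.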

Andy Burrows, head of child safety online policy at the NSPCC, branded social media an ‘irresponsible’ industry and called on government bodies to progress legislation and legally impose a duty of care on online platforms. And to be honest, it’s hard not to agree with him at this stage.

We’ve seen plenty of revisions made by the big social sites in recent times to combat the rise of social anxieties, FOMO, poor body image, bigotry, and bullying where young people are concerned, but it’s getting to the stage where concrete government intervention may be required if we’re to fully stamp these problems out.

Given the sheer volume of offensive content that continues finding its way online despite increasingly stringent safeguards, enforcing guidelines backed by law is surely the best way of progressing at this stage. That is, if we’re truly set on making social media what it was always intended to be: a place to communicate, share content, and be creative.

Gen Z are widely regarded as digital natives and, frankly, they deserve a stress- and hassle-free space to express themselves and communicate with friends. The modern world is challenging, but stories like Molly’s remind us that we can’t rest on our laurels; we have to continue striving to make social media as safe as possible for the people who use it most.
