
TikTok launches slew of safety features including For You filtering

TikTok is rolling out a slew of new safety features, including ‘maturity’ ratings on videos for young users, an algorithm that prevents bulk viewing of problematic content, and the ability to filter the For You page using words or hashtags.

Tuning your algorithm on TikTok can require sifting through a ton of pretty sketchy content – certainly if you were late to the game, like I was.

Despite its impressive rep as the most popular app of 2021, TikTok has faced near-constant scrutiny since its launch for a distinct lack of regulatory measures. Only this month, a group of American parents sued the platform after their children died attempting dangerous challenges seen on TikTok.

Although explicit adult content (involving nudity or pornography) is categorically banned, TikTok concedes that some of its videos may contain ‘mature or complex themes that reflect personal experiences or real-world events intended for older audiences.’ After all, with how bleak life can be at times, it’s how we blow off steam.

I can, however, attest to the idea that the app’s algorithms are peculiar at times. Only last week I was inexplicably taken from Love Island and gaming videos to street brawls, for instance. The Wall Street Journal has also demonstrated how the app can push young users down a rabbit hole of drug-related content.

Thankfully, in order to ensure that younger cohorts are better protected from content they shouldn’t be seeing – theoretically, that is – TikTok is now ready to roll out a slew of safety features that have been in the works for some time.

As part of this, a forthcoming system dubbed ‘Content Levels’ will classify all uploaded videos with a maturity grade, similar to the age ratings put on movies, TV shows, and games. Viewers will have to be over 18 to see videos exploring overtly adult themes.
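As a rough illustration of the gating idea only – the level names and age thresholds below are assumptions, not TikTok’s published specification – a maturity check could look something like this:

```python
# Minimal sketch of a maturity-grade gate, assuming one label per video and a
# known viewer age. Level names and age thresholds are illustrative only,
# not TikTok's actual 'Content Levels' spec.

MIN_AGE = {"general": 0, "teen": 13, "mature": 18}

def can_view(video_level: str, viewer_age: int) -> bool:
    """Return True if the viewer meets the minimum age for the video's maturity level."""
    return viewer_age >= MIN_AGE.get(video_level, 18)  # unknown levels default to 18+

print(can_view("mature", 16))   # False: overtly adult themes are gated to over-18s
print(can_view("general", 16))  # True
```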

In terms of what is considered mature, TikTok’s trust and safety moderators will pick up on content receiving numerous reports within the app. Makes sense, though it remains to be seen how fast or effective this process can be.

Previously, TikTok said content creators may be asked to tag their videos appropriately, which would likely be more useful.

While this runs continuously, a secondary system will aim to identify videos that are fine in isolation but potentially harmful when viewed in bulk – extreme dieting fads or spiralling mental health content, for instance – and will nudge the feed onto other topics.
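To make the ‘bulk viewing’ idea concrete, here is a hedged sketch of how a dispersal step might cap how many videos on a sensitive topic appear in one recommendation batch; the topic labels, cap, and data shapes are assumptions made purely for illustration, not TikTok’s real system.

```python
from collections import Counter

# Illustrative dispersal step: if a recommendation batch contains too many
# videos on the same sensitive topic, swap the excess for videos on other
# topics. Labels, cap, and structures are assumptions, not TikTok's system.

SENSITIVE_TOPICS = {"extreme_dieting", "sad_mental_health"}
MAX_PER_TOPIC = 2  # assumed cap per batch

def disperse(batch, alternatives):
    """Replace sensitive-topic videos beyond the cap with alternative videos."""
    counts = Counter()
    alt_iter = iter(alternatives)
    output = []
    for video in batch:
        topic = video["topic"]
        counts[topic] += 1
        if topic in SENSITIVE_TOPICS and counts[topic] > MAX_PER_TOPIC:
            output.append(next(alt_iter, video))  # keep the original if no alternative is left
        else:
            output.append(video)
    return output

# A batch dominated by extreme dieting content gets diluted with other topics.
batch = [{"topic": "extreme_dieting"}] * 4 + [{"topic": "cooking"}]
alternatives = [{"topic": "travel"}, {"topic": "comedy"}]
print([v["topic"] for v in disperse(batch, alternatives)])
# -> ['extreme_dieting', 'extreme_dieting', 'travel', 'comedy', 'cooking']
```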

Lastly, and probably most importantly, TikTok is handing control back to users if they don’t wish to rely solely on moderators and machine learning to do an adequate job. A smart move, considering the number of instances in which hateful material has slipped through the cracks.

Similar to muting words on Twitter, a filtering tool can accept manually inputted hashtags or words to prevent a certain type of triggering content from appearing. TikTok suggests you could block dairy or meat recipes if you were going vegan, for example, though people will likely be axing far more nefarious stuff than that.
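In spirit, the feature works like a mute list. Here is a minimal sketch of how such a keyword/hashtag filter could behave, assuming each video exposes a caption and a hashtag list; the names and data shapes are illustrative guesses, not TikTok’s actual code or API.

```python
# Rough sketch of a mute-style keyword/hashtag filter. All names and
# structures are illustrative assumptions, not TikTok's implementation.

def build_filter(blocked_terms):
    """Normalise the user's muted words and hashtags for case-insensitive matching."""
    return {term.lstrip("#").lower() for term in blocked_terms}

def is_filtered(video, blocked):
    """Return True if the video's caption or hashtags contain any blocked term."""
    caption_words = set(video["caption"].lower().split())
    hashtags = {tag.lstrip("#").lower() for tag in video["hashtags"]}
    return bool(blocked & (caption_words | hashtags))

# Example: hiding dairy and meat recipes while going vegan, per TikTok's own suggestion.
blocked = build_filter(["#dairy", "meat"])
feed = [
    {"caption": "best meat lasagne ever", "hashtags": ["#recipe"]},
    {"caption": "morning run views", "hashtags": ["#running"]},
]
for_you = [v for v in feed if not is_filtered(v, blocked)]
print([v["caption"] for v in for_you])  # -> ['morning run views']
```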

Considering that 1bn people used TikTok in a single month, it’s understandable that some moderation lapses will occur in controlling what we consume across the globe. Nevertheless, handing some agency back to users will be well received.

The updates will roll out in the coming weeks, so have your hashtags ready.
