
How ‘algospeak’ aids sensitive social media discussion

With life online increasingly controlled by overzealous censorship policies, users have begun inventing code words to dodge strict moderation on platforms like TikTok.

Though you may not be familiar with the phrase ‘algospeak,’ you’re likely to have come across the slew of code words Internet users have adopted to bypass content moderation filters on major platforms.

For the unacquainted, ‘spicy eggplant’ rather than vibrator, ‘seggs’ rather than sex, and ‘le dollar bean’ rather than lesbian are some examples.

These obscure terms are, in principle, a way for people to discuss sensitive topics on social media without needing to worry about the algorithm removing their posts (or worse, their accounts).
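To see why such substitutions work at all, it helps to picture the crudest version of an automated filter. Below is a minimal sketch in Python, assuming – purely for illustration – that moderation hinges on simple keyword matching against a blocklist; the terms and matching logic are hypothetical, and TikTok’s real systems are opaque and undoubtedly far more sophisticated.

```python
# Illustrative sketch of a naive keyword filter that algospeak evades.
# The blocklist and matching rules are assumptions for demonstration,
# not TikTok's actual moderation logic.

BLOCKLIST = {"sex", "lesbian", "vibrator"}

def is_flagged(caption: str) -> bool:
    """Flag a caption if any blocklisted word appears as a token."""
    tokens = caption.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)

print(is_flagged("let's talk about sex ed"))        # True  - caught by the filter
print(is_flagged("let's talk about seggs ed"))      # False - 'seggs' slips through
print(is_flagged("le dollar bean creators unite"))  # False - so does 'le dollar bean'
```

Of course, platforms can (and do) fold coded variants into their filters over time, which is partly why algospeak evolves as quickly as it does.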

They’re especially rife on TikTok, an app governed by famously overzealous censorship policies that have forced users to devise myriad workarounds – including an entirely new vocabulary.

Beyond their use as a tool for circumventing restrictions, however, these lexical variants are largely used by people from marginalised communities whom the algorithm prevents from openly confronting oppression.

This has its complications.

[Embedded TikTok by @seansvv: ‘Part 1 – “Algospeak”’]

‘There’s a line we have to toe, it’s an unending battle of saying something and trying to get the message across without directly saying it,’ TikTok creator Sean tells The Washington Post.

‘It disproportionately affects the LGBTQIA community and the BIPOC community because we’re the people creating that verbiage and coming up with the colloquiums.’

As Sean touches upon, despite the free speech benefits of algospeak, its heightened use is leading to a rise in miscommunication.

This is especially true for LGBTQIA individuals, who have begun saying ‘cornucopia’ as an alternative to ‘homophobia’. BIPOC users, who are reportedly too nervous to mention ‘racism’ at all, are resorting to hand gestures instead.

Conversations about sexual health, violence, and harassment are also consistently down-ranked, an issue of growing concern among organisations providing support.

As they rightly stress, countering an algorithm that already discriminates against sex-related content with ambiguity not only detracts from the gravity of these subjects, but dehumanises them as well.


‘It makes me feel like I need a disclaimer because I feel like it makes you seem unprofessional to have these weirdly spelled words in your captions,’ says Kathryn Cross.

‘Especially for content that’s supposed to be serious and medically inclined.’

This raises the question of whether algospeak is quite as effective as users think. A recent incident involving Julia Fox would suggest that perhaps it isn’t.

For some context, it stems from the ‘mascara’ trend on TikTok.

Coined as another form of anti-policing jargon used to refer obliquely to experiences of sexual assault and trauma, the associated hashtag has so far amassed more than 100 million views.

It’s generated some critical discussion points around consent and sexual assault, with many using it to raise awareness of the different forms the latter can take.

[Embedded TikTok by @big_whip13, tagged #saawareness and #menspeakup]

Misunderstanding the purpose behind it, Fox commented ‘idk why but I don’t feel bad for u lol’ on a male user’s video divulging how he ‘gave a girl mascara and it must have been so good that she decided that her and her friend should both try it without [his] consent.’

Ever since, backlash against the potentially negative repercussions of these euphemisms – namely their indirect contribution to prevailing stigmas – has been growing.

‘Unfortunately, shame is a really common barrier to victim-survivors accessing support with or without TikTok censorship,’ says Lisa Benjamin of SARSAS (Somerset and Avon Rape and Sexual Abuse Support).

‘The mascara trend is a creative way for people to share their experiences, but this needs to be coupled with an understanding that these codes and their meanings are quite quickly decoded and may be seen by people who the videos were not intended for, so it’s important people are mindful when sharing information.’

With this in mind, change must evidently come from the source.

While rules and regulations are imperative in protecting consumers, there’s an argument to be made that their increasingly constricting nature is acting more as a hindrance than a help for users seeking to hold nuanced dialogues about themes that need addressing.

Users are forced to make light of these stories rather than find solace in them, and the evasive tactic of algospeak (whose life cycle moves so fast that nobody can keep up) is in fact emblematic of a wider issue surrounding censorship.

‘Topics that would usually command empathy, understanding and sensitivity become trivialised,’ trauma expert Danny Greeves tells the BBC.

‘When that occurs – such as with the mascara trend – deeply painful and traumatic events are minimised.’

As Julia Fox’s blunder demonstrates, anyone who isn’t au fait with the situation at hand can be confused by this ever-evolving slang, and such misunderstandings can inadvertently backfire, further isolating victims or breeding controversy.

Could it be time for TikTok itself to re-evaluate the stringency of its guidelines so that the voices deserving of a safe space to speak out have the freedom to do so?

I’d say it’s long overdue.
