With life online increasingly governed by overzealous censorship policies, users have begun inventing code words to dodge strict moderation on platforms like TikTok.
Though you may not be familiar with the term ‘algospeak,’ you’ve likely come across the slew of code words Internet users have been adopting to bypass content moderation filters on major platforms.
For the unacquainted, ‘spicy eggplant’ rather than vibrator, ‘seggs’ rather than sex, and ‘le dollar bean’ rather than lesbian are some examples.
These obscure terms are, in principle, a way for people to discuss sensitive topics on social media without needing to worry about the algorithm removing their posts (or worse, their accounts).
They’re particularly rife on TikTok, an app governed by famously overzealous censorship policies that have forced users to devise a myriad of workarounds – including an entirely new vocabulary.
Beyond their use as a tool for circumventing restrictions, however, these lexical variants are largely used by people from marginalised communities whom the algorithm prevents from openly confronting oppression.
This has its complications.
‘There’s a line we have to toe, it’s an unending battle of saying something and trying to get the message across without directly saying it,’ TikTok creator Sean tells The Washington Post.
‘It disproportionately affects the LGBTQIA community and the BIPOC community because we’re the people creating that verbiage and coming up with the colloquiums.’
As Sean touches upon, despite the free speech benefits of algospeak, its heightened use is leading to a rise in miscommunication.
This is especially true for LGBTQIA individuals, who have begun saying ‘cornucopia’ as an alternative to ‘homophobia’. BIPOC users, who are reportedly too nervous to mention ‘racism’ at all, are resorting to hand gestures instead.
Conversations about sexual health, violence, and harassment are also consistently down-ranked, an issue of growing concern among organisations providing support.
As they rightly stress, fighting an algorithm that already discriminates against sex-related content with ambiguity not only detracts from the gravity of these subjects, but dehumanises them as well.
‘It makes me feel like I need a disclaimer because I feel like it makes you seem unprofessional to have these weirdly spelled words in your captions,’ says Kathryn Cross.
‘Especially for content that’s supposed to be serious and medically inclined.’
This raises the question of whether algospeak is quite as effective as users think. A recent incident involving Julia Fox would suggest that perhaps it isn’t.
For some context, the incident stems from the ‘mascara’ trend on TikTok.