Is hate the price we must pay for a connected world? We analyse the connection between social media and extremism.
As the world continues to reel from the heart-breaking events in Christchurch on 15th March, Facebook has announced changes to its policies regarding racist content.
Previously, Facebook's algorithm hunted out only explicitly white supremacist content – posts that actively promoted the belief that white people are superior to other races – while treating white nationalist and white separatist content – that is, the belief that races should live separately, or that a white majority should control predominantly white countries – as legitimate. As of next week, Facebook will no longer draw a distinction between the two.
The confirmation that white nationalism cannot be meaningfully separated from hateful racism is long overdue from Facebook. Given the cultural dynamics of our planet's racial history, the company has finally put to bed the unconvincing argument that "national pride" means the same thing coming from white people as it does coming from non-white people.
The announcement follows accusations of insufficiently filtered content on Facebook, Reddit, and YouTube. Many argue that, had these platforms employed the same level of regulation for white supremacist content as they did with Islamic extremist content, the Christchurch shooter's rampage would never have been allowed to spread.
Each of these websites has since expressed its determination to eradicate any trace of the shooter's attack. YouTube tweeted immediately after the attack: "Please know we are working vigilantly to remove any violent footage."
Our hearts are broken over today's terrible tragedy in New Zealand. Please know we are working vigilantly to remove any violent footage.
— YouTube (@YouTube) March 15, 2019
It's unquestionably true that social media outlets – in particular Reddit and content-sharing boards like 4chan – have yet to adequately grapple with their role in facilitating radicalisation and recruitment. But what is becoming increasingly clear is that tech companies now face a content moderation problem fundamentally beyond the scale they know how to deal with.