Parag Agrawal, the new CEO of Twitter, proposed this new change on his very first day.
Announcing the new guidelines, Twitter wrote: ‘When we are notified by individuals depicted, or by an authorized representative, that they did not consent to having their private image or video shared, we will remove it.’
Naturally, there are a couple of exceptions to this new policy.
‘This policy is not applicable to media featuring public figures or individuals when media and accompanying tweet text are shared in the public interest or add value to public discourse,’ the announcement continued.
Accusations of free speech censorship immediately followed, particularly from American conservatives. Democrats, on the other hand, are arguing that misinformation and extremist content are the real, justified targets of this new policy.
In an attempt to protect the new CEO – it’s only his first week on the job, after all – Twitter emphasised that the idea was put forward by the company’s content moderation trust and safety team. But naturally, Parag Agrawal is bearing the brunt of the criticism.
At face value, many felt the announcement was still too vague – will all content be judged on its newsworthiness or ‘public interest value’? How will this be measured? And how can moderators avoid implicit bias?
If Twitter simply intends to use its new policy to protect individuals in places where national laws fall short, then there shouldn’t be an issue.
Similar protections already exist in law: in the EU, for example, individuals can request that photos of themselves be removed from online spaces, and platforms are obliged to comply.
Where it gets tricky is any intended use beyond that standard legal protection. If the policy is enforced as a means of stamping out fake news or radicalised content, it may become harder for people to accept.
Any disruption to social media echo chambers can be volatile – and intense criticism is always expected. Keep in mind though that the announcement is still fresh, and more questions about the ins and outs of the new policy are likely to follow soon.
As for Twitter’s content moderation team, let’s hope they remain impartial as they sift through content, determining what is and is not worthy of contributing to ‘healthy debate’.
Until the policy is enforced, we’ll have to wait and see what notable changes follow.