
Is age verification on UK pornography viewing a good idea?

The UK is once again attempting to introduce age verification checks for porn viewers. The bill would require all websites that host user-uploaded content to act with a ‘duty of care’ and remove harmful or illegal content. It may sound like a good idea in theory, but the parameters are worryingly vague.

Pornography could be seeing a major overhaul in the UK soon.

The Online Safety Bill, first introduced in 2021, would require social networks and websites that allow user-generated content to enact a ‘duty of care’. This means that all sites in the UK would have a responsibility to remove harmful or illegal content consistently and promptly.

In addition, adult websites would be placed behind age-verification checks. Users may have to prove they’re over 18 using a credit card or a third-party identity service.

Any website that fails to comply could be blocked in the UK or face a hefty fine. The bill is expected to be introduced to parliament in the coming months.

On the surface, this seems like a sensible idea, right? Pornography websites would no longer be available to anyone and everyone on a whim, there would be better regulation of lewd content, and more barriers in place to ensure that internet surfing is safer – at least in theory.

The problems lie in the vague terminology used throughout the proposal. Digital rights activists are concerned that it could significantly erode free speech online and create an ideal environment for state-level censorship to thrive across the internet.

So, is this new bill actually a good idea? Let’s take a closer look.


Why is it a good idea?

First, let’s look at the motivations for creating these new laws and the potential positives.

A key reason for the bill is to protect children from viewing explicit material accidentally, or being exposed to pornography excessively.

While research into the long-term effects of porn use is limited, heavy consumption has been found to skew sexual expectations and sometimes lead to extreme fetishes. Users can also feel worse about themselves, neglect other areas of their lives, and become more aggressive.

Putting an age-restriction system in place would reduce porn consumption across the board – though this isn’t inherently a good thing – and more crucially prevent children from watching inappropriate content.

Another area of concern is misinformation. The bill would make ‘knowingly false communications’ an offence, meaning articles or podcasts that deliberately spread false information could face prosecution.

The intention here is to reduce confusion and prevent worrying trends such as anti-vaccine movements and flat Earth groups from gaining even more momentum. Echo chambers and bot accounts continue to create dangerous narratives online that disrupt democracy and undermine our political systems.

By introducing some kind of legal repercussion, we may see a reduction in misinformation online.


What are the potential problems with this new rule?

Despite these seemingly sound motivations, there are multiple issues with the legal terms used and their potential implications.

Digital rights groups, for example, argue that these new rules could wind up making the internet a less safe place. The Internet Society explains that the bill ‘will force service providers to weaken or remove encryption to meet new content identification requirements’.

Content moderation at this scale would mean weaker end-to-end encryption and worse security for users. Leaks and massive dumps of personal information are already a serious issue; weakening already shaky infrastructure could lead to further problems.

In addition, the Digital, Culture, Media and Sport Committee criticised the bill, stating that it ‘neither adequately protects freedom of expression nor is clear and robust enough to tackle illegal and harmful content’.

There is also the tricky question of free speech. How does a company, brand, or even government decide what is ‘threatening’, ‘illegal’, or ‘harmful’? These terms are subjective, and without concrete guidelines, the ability to post freely and authentically could be severely curtailed.

We’ll have to see in the coming months how this new bill will work, both logistically and practically. For now, it’s best to keep your online activity as safe as possible via your own means – rather than relying on legislative protection.

It may be a while before anything comes into force.
