Online Safety Act and the Balkanisation of the internet
Matthew Lesh argues that the censorious nature of the law, enacted in the name of child protection, was obvious from the start.
This is the first in a series of Substacks promoting debates at this year’s Battle of Ideas festival, which takes place at Church House in Westminster on 18 & 19 October. This will be the twentieth Battle of Ideas festival. For more information about the programme and speakers announced so far, visit the Battle of Ideas website. Early-bird tickets are still available here.
The US government is raising diplomatic concerns about the state of free speech in Britain. Virtual private networks (VPNs) – which can allow users to access online materials as if they were logging on from a different country – have surged to the top of the app-store charts as reports mount of political content disappearing from social-media feeds for users who have yet to verify their age. Global communities, like lgbtqia.space, have geoblocked their entire websites for UK users, while some small British user-discussion forums, like London Fixed Gear and Single Speed, have completely shut down. Wikipedia is threatening to pull out of Britain. Even articles in the Guardian are admitting ‘something’s gone wrong’.
It’s a wild turn of events, but for those who have been paying close attention to Britain’s Online Safety Act, including readers of this Substack, it should not come as much of a surprise. That’s because Baroness Fox, the director and founder of the Academy of Ideas, was one of an extremely small set of legislators who bothered to meaningfully scrutinise the Bill in Parliament. While most legislators shrieked, ‘Please won’t someone think of the children!’, Fox was one of the few who raised concerns about the implications of this wide-ranging law for all our freedoms.
There’s been quite a lot of superficial commentary about the Act since its passage. This post will try to get a little deeper into the weeds to scrutinise precisely why this has been happening in recent weeks. It is impossible to do the whole topic justice. The law itself is 353 pages of dense legal language, while Ofcom has already delivered over 2,000 pages of guidance and codes of practice, with much more still to come.
Instead, we’ll focus on the implications of the child-safety provisions coming into force, which have been some of the most visible consequences of the law.
Some have claimed in recent weeks that the impact of the Act is some sort of overreaction by Big Tech, perhaps even, as was said in this BBC report, a mischievous Elon up to his old tricks. There have also been some reports about Ofcom investigating platforms for excessive censorship. In reality, the censorious nature of the legislation was obvious from the start, and its critics eerily foreshadowed the precise consequences.
The new porn ban
The Online Safety Act (Part 5) requires any service that contains pornographic material (accessible to British users) to introduce age verification or estimation to prevent access by children. The idea was first introduced in the Digital Economy Act 2017, but was abandoned shortly after Boris Johnson became prime minister in 2019, over concerns about user privacy and practicality. Its key proponents tend to come from the conservative side of politics, worried about children’s access to adult material.
Forcing adults to prove their identity with passports, credit cards or selfies creates an unnecessary surveillance infrastructure, linking individuals to their most intimate habits and handing vast amounts of sensitive data to thousands of private companies. The scheme will inevitably expose users to data breaches, hacking and scams.
At the same time, the system is doomed to fail in its stated goal: teenagers will simply use VPNs (up 1,400 per cent since the law’s introduction), borrowed cards or alternative platforms, while many adult sites will refuse to comply, cutting off lawful access for British users. Instead of protecting children, the policy risks creating a false sense of security for parents and further infantilising the public, all while undermining the principle of individual choice and normalising state interference in the bedroom. Recent polling from the Children’s Commissioner indicates that children’s exposure to this adult material had increased before the Online Safety Act’s passage. Baroness Kidron, a key advocate for the law, has admitted that kids will inevitably use VPNs.
Protecting children
The Online Safety Act does not, however, just focus on pornography. Many users of platforms like X or Reddit will have noticed their access to all sorts of other content has been restricted by default since last month, when the child-safety duties within the law came into effect. This is the centrepiece of the law: provisions intended to protect children from harmful content. The Act (Chapter 4, Part 3) imposes expansive duties on user-to-user services (like social media sites, web forums, online games, etc) if they are likely to be accessed by children – a safe assumption for any online platform.
This includes duties to prevent children from accessing certain types of ‘priority’ content, and to take further steps to mitigate potential harm. This includes not only pornographic content but also violent material, material that encourages suicide, self-harm or eating disorders, and content involving child sexual abuse. There are further categories of priority harmful content, including abuse and bullying, hate content (based on protected characteristics like race, religion and sexual orientation), and glorification of dangerous behaviour. There is also an expectation that children should not be able to access age-inappropriate material that is likely to cause them psychological harm, even if not illegal.
Suffice it to say, there’s a lot here that children are not meant to access, even if it could perhaps have some educational value. It is an intriguing choice in the context of the government promising to extend voting to 16- and 17-year-olds.
There are also severe implications for adults. That’s because this material is expected to be hidden from users by default until the platform can substantiate that they are not children (through some sort of age estimation or verification). It is these provisions that explain why sites like X have been hiding descriptions of rape-gang trials and imagery from the Gaza and Ukraine wars, and even the video of police detaining a demonstrator at a migrant hotel protest.
Critics of X and other social media platforms have retorted that they have become overzealous in hiding posts, perhaps as some sort of Elon Musk-led rebellion against the law. But this fails to appreciate the incentives created by the law. Platforms can face fines of up to 10 per cent of global revenue – billions of pounds for the largest companies – for failure to comply with child-safety duties and other regulatory requirements. The expectations when it comes to free speech are comparatively weak: merely a duty to ‘have regard’ to it. It’s therefore no surprise that platforms are acting in a relatively conservative way, hiding any content that could potentially fall foul of the rules until users have completed age verification.
What’s next?
The child-safety requirements introduced by the law have already had profound consequences. For British users, the internet is becoming increasingly Balkanised. While our access remains broader than that of users in China or Russia, the reality is clear: the British internet is now significantly more restricted and tightly controlled.
And make no mistake, this is only the beginning. We are only starting to see the implications of provisions that mandate the automated removal of content for all users at a much lower than usual legal threshold, weaken end-to-end encryption, and grant sweeping new powers to ministers and Ofcom. The result is the dawn of a new, highly regulated era, one that promises to reshape the internet as we know it.
Matthew Lesh is country manager at Freshwater Strategy and public policy fellow at the Institute of Economic Affairs.
It is not enough to argue that the legislature should not have passed the Act: it was never safe for the legislature to have been allowed the power to do so in the first place.
Those in positions of power, no matter which political faction they come from, will always have a strong tendency to do things which increase their power, because power is a convergent instrumental goal: in other words, the more power one has, the easier it is to achieve any other goal, whatever it may be. Moreover, the more power an individual or organisation has, the easier it is to acquire yet more power, for precisely the same reason.
Thus, choosing between competing political factions will never be a sufficient protection against abuses of power, nor against an exponential increase in concentrated coercive power in the hands of the state. In other words, democracy alone is inherently insufficient to prevent a decline into authoritarianism and, in due course, totalitarianism (gradual at first, then rapidly accelerating, as with every type of exponential growth).
Only the actual and irreversible dissipation of state power will ever be enough to make people safe from arbitrary abuses of power, in which politicians trade off an arbitrarily large amount of harm to everyone else against an arbitrarily small increase in their chance of retaining power. That means mechanisms such as a very rigorous separation of powers, greatly strengthening the rule of law (especially as applied to the state itself), and curtailing in particular the power of the state to give itself more power and discretionary coercive power of all forms.
This is only likely to happen if power dissipation is near the top of a high proportion of voters’ priorities in every election for the foreseeable future. In other words, the only way of preventing an exponentially accelerating decline into totalitarianism, of which the so-called Online ‘Safety’ Act is a very, very sinister part, is to make power dissipation a high priority in public discourse over the long term.