The Online Safety Act has drawn criticism from across the political spectrum, with opponents warning of burdens on small forums, threats to privacy, and risks to free expression. A closer look suggests that while these concerns have substance, they are being overstated, writes Sam Gilbert.

More than half a million UK citizens think the government should repeal the Online Safety Act, along with an eclectic mix of politicians, business leaders and digital rights organizations ranging from Nigel Farage to Marc Andreessen to Big Brother Watch. What is it that they dislike about it, and is there anything to their objections?
The focus of a recent petition to repeal the Online Safety Act is its impact on online forums, which it argues cannot reasonably be expected to bear the costs of compliance. Forums are within the scope of the Act because they count as “user-to-user services” where people can make public posts and reply to posts by others, and the Act applies to them regardless of their scale. As the government’s response to the petition points out, the absence of a scale threshold is a feature, not a bug: the government would not want small forums focused on topics like suicide and self-harm, or more innocuous forums where adults and children can interact, to be exempted.
The closure of benign-sounding forums like Charlbury in the Cotswolds, London Fixed Gear and Single Speed, and The Hamster Forum has been widely reported in media coverage of the Online Safety Act. I wanted to understand how onerous the burden of compliance really is, so I used the resources provided by Ofcom to re-create the process forum operators are supposed to have gone through, starting with the Regulation Checker tool.
I found the user interface design of these resources confusing, with no clear primary call-to-action, while hyperlinks led me round in circles between the pages “Guide for services: complying with the Online Safety Act” and “Check how to comply with the illegal content rules”. Together with arcane terminology (e.g. “characteristics of a service”, “functionalities”), this made it difficult to understand forum owners’ responsibilities under the Online Safety Act and what they need to do to discharge them. My heart sank at the prospect of reading the documentation (e.g. the 84-page “Risk Assessment Guidance and Risk Profiles”) and completing the mandated forms (e.g. the 33-page “Illegal Content Duties Record-Keeping Template”). Overall I sympathized with part-time and voluntary forum owners who felt it was too much.
In practice, however, and contrary to the impression given by media reporting, only a small number of forums have actually shut down – 22, according to the tech lawyer Neil Brown, who has been documenting them. Having temporarily closed, London Fixed Gear and Single Speed and The Hamster Forum are now back online. There are also forums that have completed the risk assessment required by Ofcom and published the results (here is an example from the books forum Rambling Readers). So, while there is certainly scope to lighten the bureaucratic load on forums, it is questionable whether the effort to do so would be justified. Furthermore, as Ofcom is legally required to act with proportionality, unless The Hamster Forum is actually being instrumentalized in harmful ways, the chances of enforcement action being taken against it are vanishingly small.
The Online Safety Act requires service providers to verify the age of users accessing age-restricted content – particularly pornography. It does not prescribe the means, leaving it up to service providers to decide whether to rely on payment card data, estimate users’ age with facial scanning, or insist that they upload identity documents showing their date of birth.
This is of concern to privacy campaigners because it empowers providers of age verification software to capture extraneous personal data – driving licenses and passports, for example, contain more information about users than just their date of birth. They also worry that large repositories of images of identity documents could become a target for cybercriminals, potentially exposing users to identity fraud.
But if users do not trust age verification software providers to treat their data responsibly, we should surely expect a market response in the form of new privacy-preserving age verification products. Denmark’s MitID system provides a glimpse of how these might look in practice. MitID is used to manage access to the majority of online services, from tax self-assessment to online banking, but also for age verification. As an adult wanting (say) to buy a case of New Zealand Pinot Noir from an online wine merchant, you log in to MitID, and MitID confirms to the wine merchant that you are over 18. There is no need for MitID to disclose anything else about you – including your age or date of birth – to the wine merchant, who can simply trust the response from MitID. In technical jargon this works like a “zero-knowledge proof”: the wine merchant can know you are over 18 with “zero knowledge” about who you are (though you might want to reveal yourself if you want your Pinot Noir delivered).
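To make the idea concrete, here is a toy sketch in Python of the data-minimising flow described above. It is not real zero-knowledge cryptography and the names (IdentityProvider, WineMerchant) and shared-key signature scheme are invented for illustration: the point is simply that the identity provider keeps the full record, while the merchant receives and verifies only a signed “over 18” claim.

```python
# Toy sketch of an attribute-only age attestation, in the spirit of the MitID
# flow described above. The names and the HMAC signing scheme are illustrative,
# not any real API or production cryptography.
import hmac, hashlib, json
from datetime import date

class IdentityProvider:
    """Holds full identity records; discloses only an 'over 18' claim."""
    def __init__(self, secret: bytes):
        self._secret = secret
        self._records = {"alice": {"date_of_birth": "1990-05-01"}}  # full data stays here

    def age_attestation(self, username: str, threshold: int = 18) -> dict:
        dob = date.fromisoformat(self._records[username]["date_of_birth"])
        today = date.today()
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        claim = {"over_threshold": age >= threshold, "threshold": threshold}
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["signature"] = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return claim  # no name, no date of birth, no exact age

class WineMerchant:
    """Relying party: trusts the provider's signature, learns nothing else."""
    def __init__(self, shared_secret: bytes):
        self._secret = shared_secret

    def may_sell_alcohol(self, claim: dict) -> bool:
        payload = json.dumps(
            {k: v for k, v in claim.items() if k != "signature"}, sort_keys=True
        ).encode()
        expected = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, claim["signature"]) and claim["over_threshold"]

secret = b"demo-shared-secret"
provider, merchant = IdentityProvider(secret), WineMerchant(secret)
print(merchant.may_sell_alcohol(provider.age_attestation("alice")))  # True
```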
One complication is that MitID depends on Det Centrale Personregister (CPR) – the centralized register of everyone who lives in Denmark. There has been no prospect of developing an equivalent in the UK since the repeal of the Identity Cards Act in 2011 and the concomitant destruction of the nascent National Identity Register. Developing zero knowledge proofs for age verification in the UK would therefore require an alternative approach – for example, by relying on previous age checks carried out by mobile network operators and using phone SIMs to send a cryptographic token to the age-restricted service.
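A UK flavour of that SIM-based route might look like the following sketch, which is purely illustrative: a mobile operator that has already age-checked a subscriber signs a short-lived, anonymous “over 18” token, and the age-restricted service verifies the signature against the operator’s public key. The token format is invented, and the example uses the third-party Python cryptography package.

```python
# Illustrative sketch (not a real scheme) of the SIM-based idea above: a mobile
# network operator that has already age-checked a subscriber signs a short-lived
# "over 18" token, which an age-restricted service can verify offline using the
# operator's public key. Requires the third-party 'cryptography' package.
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The operator's signing key; the service holds only the public half.
operator_key = Ed25519PrivateKey.generate()
operator_public_key = operator_key.public_key()

def issue_token(over_18: bool, ttl_seconds: int = 300) -> dict:
    """Operator side: sign a minimal claim with an expiry, nothing identifying."""
    claim = {"over_18": over_18, "expires": int(time.time()) + ttl_seconds}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": operator_key.sign(payload)}

def verify_token(token: dict) -> bool:
    """Service side: accept only an unexpired claim with a valid signature."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    try:
        operator_public_key.verify(token["signature"], payload)
    except InvalidSignature:
        return False
    return token["claim"]["over_18"] and token["claim"]["expires"] > time.time()

print(verify_token(issue_token(over_18=True)))  # True
```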
A different objection is that age verification requirements as currently implemented are ineffective. Age gates can be bypassed with a Virtual Private Network (VPN) – widely available software that masks the user’s geographic location from the service provider. Many commentators noted the surge in demand for VPNs in the period immediately after the Online Safety Act came into force.
But the challenge presented by VPNs is not unique to services required to comply with the Online Safety Act. Rights-holders who license content to streaming platforms like Netflix and Amazon Prime expect the platforms to block access to users in locations where the licenses do not apply. Over time, streaming services have got better at identifying traffic from VPNs, by checking databases of IP addresses rented by VPN providers and using algorithms to flag online behaviour patterns suggestive of VPN use. It would be reasonable if Ofcom had the same expectations of the services within the scope of the Online Safety Act (or at least of the major platforms). Detecting VPN use is a cat-and-mouse game, and users who were determined enough to try multiple VPNs to find a way around age-gating would likely succeed. But age verification does not have to be anything like 100% effective to support the Act’s aim of reducing instances of online harm.
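For the first of those techniques – matching incoming traffic against IP ranges known to be rented by VPN providers – the check itself is straightforward, as in this minimal Python sketch. The CIDR ranges below are placeholders drawn from the documentation address space; real services rely on commercial or self-built databases.

```python
# Minimal sketch of one technique mentioned above: checking a visitor's IP
# address against published ranges attributed to VPN providers. The ranges here
# are placeholders; real services buy or build such databases.
from ipaddress import ip_address, ip_network

# Hypothetical CIDR blocks attributed to commercial VPN exit nodes.
KNOWN_VPN_RANGES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

def looks_like_vpn(client_ip: str) -> bool:
    """Return True if the IP falls inside any known VPN provider range."""
    addr = ip_address(client_ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)

print(looks_like_vpn("203.0.113.42"))  # True – inside a listed range
print(looks_like_vpn("192.0.2.10"))    # False – not in any listed range
```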
The Online Safety Act creates strong economic incentives for social media platforms to err on the side of caution when it comes to content moderation. Free speech advocates claim this is a form of censorship which is incompatible with the values of a liberal society.
In our 2023 policy brief on the Online Safety Bill, the much-missed Ross Anderson and I argued that the over-policing of social media posts would be a predictable consequence of giving tech platforms legal duties to protect their users from harm. However, we also argued that such over-policing was a price worth paying to mitigate the serious harms the Act seeks to tackle, and that describing it as illiberal was incorrect.
This is partly because maximizing overall freedom in any society involves constraining the ability of the strong to inflict cruelty on the weak. But it is also because having one’s post removed from a particular online platform – or even being permanently banned from posting on that platform – does not meaningfully impinge on one’s speech rights, since one remains free to post on other platforms, or to build a platform of one’s own. It is not the providers of social media services who have the power to prevent individuals from expressing themselves online, but the providers of infrastructure services like hosting and cloud security – and they are outside the scope of the Online Safety Act.
The Online Safety Act is imperfect legislation – it imposes bureaucratic burdens on small forum operators, diminishes online privacy, and incentivizes social media platforms to take down legitimate speech. However, the extent of these problems has been exaggerated in media and public discourse, and there are reasons to believe they will become less salient over time as actual instances of enforcement by Ofcom are reported and as market actors respond to the conditions the Act has created. It is the first serious attempt by a government to deal with the panoply of harms the web has engendered, and at a time when tech company CEOs are abandoning many of the commitments they had previously made to self-regulation, it would be folly to roll it back.
The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.