
The United Kingdom serves as a warning of where the current governmental trend of regulating the Internet may lead. The country – potentially the West’s canary in the coal mine on Internet censorship ever since the criminalisation of “offensive” or “indecent” online messages in the Communications Act 2003 – is facing widespread internal protests against the Labour government’s “Online Safety Act”: a law ostensibly attentive to the need to protect minors from adult content and communication, but with a number of other nefarious effects baked into it. Among the criticisms are the law’s effects on regular Internet users’ privacy, and the way it restricts the sharing and discussion of information on sensitive political topics, most notably immigration, on social media.
In Britain, the heated debate around the new law has produced harsh rhetoric between the political camps. Those opposed to the law have been smeared as siding with child predators, while those in favour are accused of insidious purposes that have less to do with children than with control of the Internet. Keir Starmer’s Labour government is nonetheless committed to not walking back the Act, despite popular petitions and criticism from media and interest organisations.
European laws can and do slip into absurdity
One of the common concerns over censorship laws is the “slippery slope” argument: that interpretations of such laws will inevitably expand to encompass more than the explicit target of the legislation, owing to the dynamics of language.
Is one inciting hatred against a group merely by discussing a particular negative phenomenon in relation to that group? The requirements to first define a group and then justify its need for legal protection are also rife with controversy, and inevitably lead to arbitrary inclusion and exclusion. The purpose of a “hate speech” law – to protect the general population from undue harassment – thus devolves into an instrument of lawfare that one group can wield against another.
This is, for example, what happened in Sweden earlier this year, when the Islam critic and Quran burner Salwan Najem was convicted of incitement of hatred against Muslims simply for taking an active part in a Quran burning demonstration – without verbally targeting Muslims as a group, and while focusing his actions on a particular book and the religion of Islam.
It is hard to imagine, in the current climate of European politics, that laws like these are applied equally, as statements and criminal actions explicitly targeting Swedes for their ethnicity are not prosecuted as hate crimes in Sweden. Another recent example that sparked much debate in Sweden was the decision not to prosecute a number of pro-Palestinian activists who compared civilian casualties in Gaza to the historical Nazi murders of Jews by hanging dummies in gallows to evoke the Holocaust. Whether the installation truly constituted a hate crime is debatable, but the case was never brought to court, which has polarised public opinion. Is the law against incitement really applied equally to protect all groups from slander and threats, as its defenders claim?
This illustrates the problem with censorship laws that has been identified primarily by the right wing – likely because it is mostly conservatives who find themselves on the receiving end of questionable verdicts.
The United Kingdom already has its fair share of arbitrary and unfair “hate speech” incidents, most of them taking place on social media and enabled by the Communications Act 2003. But the Online Safety Act is something different: it preemptively prevents information from being viewed by others, before any exposure to “harmful” content has taken place. This makes it harder to notice when the law has been illegitimately applied to minimise the spread of a certain topic or type of content.
Censorship of immigration discussion?
Many critics have called the Online Safety Act an outright attempt at political censorship, specifically because of the wording in one of the government’s own explainers, which lists the sensitive topics concerned. One of the bullet points is “racially or religiously aggravated public order offences”; another is “illegal immigration and people smuggling”.
This has been interpreted by many critics as meaning that these topics are forbidden for minors, and as such discussions about them must only be accessible to verified adults. Is the goal to keep British minors in the dark about illegal immigration and racial and religious conflicts?
A Yahoo News fact check found that in the full legal text, it is not content relating to illegal immigration as such that is age-regulated, but content relating to assisting illegal immigration. While that may defuse some of the concerns, the law’s wording regarding racially or religiously aggravated public order offences is not as clear. What can instead be inferred is that any online content violating the laws against racially or religiously aggravated public order offences is to be subject to age restriction.
This is complicated for the reasons discussed earlier. These “hate crime” laws have often been criticised for being skewed, both in their legislative process and in court practice, to favour the grievances of some groups while ignoring others. General public trust that these laws will be applied fairly is low, especially in the United Kingdom after multiple incidents in just the past year, in which persons participating in or commenting on the anti-immigration riots of the summer of 2024 received controversially long prison sentences on incitement charges.
That these laws are to be used to brand online content preemptively also creates a situation where, in order to reduce the risk of a government reprimand, online platforms must limit discussions in which public order offences of this kind are likely to be suspected. This will curtail the freedom to discuss immigration, and relations between native Brits and immigrants, on the Internet, since all such channels must be kept strictly inaccessible to potential minors – or “unverified” adults.
Forced identification to use online platforms
In line with the stated intention of preventing minors and children from being exposed to offensive and harmful online content, the new law has led a multitude of major platforms to implement age verification for users based in Britain – on occasion spilling over into EU countries. X users across Europe have had a taste of how the new law replaces certain posts on the website with a disclaimer stating that the site is “determining” the user’s age – presumably by analysing the user’s content preferences. YouTube has announced that it will roll out a similar feature next year, with a trial run in the United States. Even Spotify has been forced to accommodate the new requirements from the British government, deleting accounts that fail to verify their age.
Other websites and online services have gone so far as to demand facial identification, either in the form of submitted ID cards or through photography. Mundane platforms for chatting and gaming have ramped up their verification to near-banking levels, all in the name of making sure minors do not see inappropriate content.
The threat to online privacy that all this poses is significant. The Online Safety Act can in practice be compared to the so-called Chat Control 2.0 that the EU was planning to roll out last year, which would have required AI analysis of all of Europe’s online chat content – even on encrypted platforms – to identify material that might constitute sexual exploitation of children. Not only was the right to private communication severely threatened; the risk of false positives was also raised as a major concern. It is simply too hard to cast such an infinitely wide net without undermining the privacy rights of ordinary, law-abiding citizens.
Some platforms whose very inclusion in the Online Safety Act’s scope is itself contentious, such as Wikipedia, are challenging the imposition of user verification in the United Kingdom. According to Wikipedia, the Act endangers the anonymity of its millions of volunteer contributors – the anonymity that protects their freedom of speech and shields them from persecution by various regimes for their activities in the name of freedom of information.
In fact, in a situation not so dissimilar to that under various non-democratic regimes, many Brits have now taken to using virtual private networks, VPNs, to bypass the restrictions that a British IP address imposes on them. And, eerily similar to a non-democratic regime, the Labour government has expressed its willingness to ban VPNs to enforce compliance – ironically, as some critics have pointed out, despite VPNs being promoted by the United Kingdom’s National Cyber Security Centre as a way for companies to guard against hackers.
In general, the controversial Act will have far-reaching ramifications for Internet culture, built over decades of grassroots engagement and a very special kind of online social trust. And in the end, it will not just affect the United Kingdom, but the whole world.
The political push for Internet regulation in Europe
Age restrictions and verification requirements for viewing online content are not a novel demand from 10 Downing Street alone. They have been proposed with various justifications all across Europe, and can probably be traced back, in other forms, to the early days of the Internet as a popular technology.
In present-day Sweden, the wheels have been set in motion to impose age restrictions on social media after organised crime gangs used certain platforms to recruit children for dangerous illegal missions. There, the concern is not so much inappropriate material – adult content, political extremism or crime – being shown to minors, but rather whom children may come into contact with on social media. There is also, as in the rest of the West, much debate about how unhealthy body ideals are promoted on social media and how this affects the mental health of young people.
While governments, or the EU, gauge the public’s reception of various types of restrictions on the Internet, the social media platforms themselves have of course always practised a form of housekeeping, wanting to steer clear of liability should their services be accused of negatively affecting the youth. Many popular websites have had soft age requirements for years, but it is rare that they force their users to actually verify themselves.
Generally, it is poor marketing for web services or video games to demand that users surrender the anonymity that is taken for granted on the Internet. In a sense, then, websites and social media platforms can be expected to comply willingly with new laws that require this of them, since compliance solves their problem of potential liability while absolving them of the blame for infringing on their users’ sacred privacy. It is the government that is asking this of them, after all.
This could explain why, despite the Online Safety Act’s worrying implications, no tech giants have joined the masses demanding that the law be repealed for the sake of free speech, even though doing so would ostensibly line up with the tech industry’s legacy ideals.
With the new system in place in the United Kingdom (for the rest of Keir Starmer’s term, at least) and the EU still on the offensive regarding Internet regulation, the rest of Europe is likely to be subjected to similarly overbearing legislation in the near future. This has to be met with the same indignation it is currently met with in Britain.
An alternative solution
The uplifting fact in the potentially impending cyber darkness facing Europe is that the Internet fundamentally cannot be controlled. Various technologies and tricks for evading unfair surveillance exist, and it will be hard for a democratically accountable system, whether a national government or the European Commission, to eliminate them all. What will happen instead is that trust in governing institutions is further eroded with each attempted infringement, as the online information-sharing experience so vital to 21st-century humans grows more and more tense.
Ultimately, trying to repress technology will only send the government on an endless wild-goose chase, constantly having to rethink and repair the restrictions it has in place because of the inherently dynamic nature of the field.
If one truly wants to protect children from harmful influences, it is better to regulate not the software but the hardware. A serious proposal aimed at reducing unhealthy and unproductive screen habits would be to impose age restrictions on the purchase of smartphones. This would make minors’ Internet access less spontaneous, and potentially limit it to more controllable environments, such as the home or school. The incidence of youths being harmed by unsuitable content on social media – which more or less exploded in tandem with the smartphone – would surely go down, all while preserving the sanctity of the Internet as a place where free-thinking and responsible citizens can still share and discuss all the ideas they want without censorship or compromised anonymity.
But perhaps, no government-penned law should be allowed to replace the greatest tool to prevent harmful content from reaching children – parental responsibility.