Bearing in mind that I am a developer and NOT A LAWYER, it is my understanding that if:
- You host a discussion forum.
and
- You, or any of your users, are based in the UK.
Then the UK government considers you subject to the UK Online Safety Act. Even if your forum is about hamsters. Failing to comply could get you a fine of up to £18 million or 10% of the company’s global revenue, whichever is greater.
That got your attention, didn’t it?
How can you be subject to UK law if you don’t have a presence in the UK? Good question. Are they going to extradite you, or grab you if you come to the UK on holiday? I have no idea. Isn’t it all a bit over the top for a small customer forum? I think so. What if every country starts trying to apply its laws to citizens of other countries? Bigger yachts for the partners in law firms, I guess.
If you are outside the UK, you will have to make your own decision about whether to care about this law. But I am based in the UK. So I definitely need to care about it – even though my Discourse forum is a highly moderated, technical forum about data cleaning, transformation and analysis, with no porn, violence or gambling content.
There are hundreds of pages of guidance about this new law, which covers massive companies, such as Google and PornHub, as well as my little company. But my understanding is that the minimum a forum owner needs to do is:
- Have an online safety policy
- Assess the likelihood of children accessing your site
- Assess the risk of harmful content
- Take appropriate measures based on the above
- Regularly update your assessments
Depending on your assessment and the type of content, you may need age verification checks or other measures. I assess that my data wrangling forum is not attractive to children and very unlikely to have harmful content, so I have not gone through the massive ball-ache of adding age verification.
For good measure, I have also disabled the ability of customers to direct message each other, as that is something I can’t moderate and don’t want to be responsible for.
This is 2025, so I generated my online safety policy and assessments with some help from Microsoft Copilot. Feel free to use them for inspiration if you need to generate your own policy. But please don’t copy them word-for-word.
Incidentally, generating documents that none of my customers will ever read seems the perfect use case for LLMs.
I really hope I don’t get forced to add age verification. I would rather shut down the forum.
For more details, start with the official Online Safety Act Explainer.
I see Wikimedia is pushing back against the UK government. It will be interesting to see how that goes.
If you think I have got anything wrong here, please post in the comments.
** Update: 29-Jul-2025 **
The Online Safety Act probably also applies to this blog. Oh boy…

Our government is dumb. This is an overreach that fails to grasp the real-world implications. Requiring ID verification for all users is both logistically unfeasible and potentially harmful to privacy. For many smaller websites, the financial burden of implementing age verification systems is prohibitively high. A more realistic solution would be device-based identification, not something that jeopardizes privacy by requiring sensitive personal information.
I would lobby my elected representative to advocate for a practical alternative, but she is a government lackey and an empty shell that went along with this measure without any critical evaluation.
My understanding is that you don’t require ID/age verification for every forum/blog – only those likely to expose children to harmful content, or anyone else to illegal content. Because of the subject matter and strong moderation, I don’t assess this to be an issue for the Easy Data Transform forum.
That’s true in that not every forum or blog is automatically required to implement ID verification, though many could still be subject to this requirement based on their content and user interactions. It wouldn’t be limited to pornography or gambling, as many seem to think, but any kind of service “they” deem poses a risk to children or exposes anyone to illegal or harmful content, which is a wider net. Hypothetically, could this include:
- Forums discussing mental health, self-harm, eating disorders, drug use, etc.
- Platforms that allow messaging, commenting, or uploading images/videos.
- Video-sharing sites, even if the content isn’t explicitly adult but includes violence or strong language.
- Fan fiction or roleplay sites with mature themes.
- Educational platforms that allow free discussion but lack moderation.
Etc., etc. The problem I have with it is that the Act expects even small operators to perform risk assessments, use an age or identity verification tool if kids might access the site (these things aren’t cheap?), enforce content moderation policies, keep records, report incidents, and cooperate with Ofcom – all that palaver. Also, the terms “likely to be accessed by children” and “harmful content” are vague and lack thresholds.
The act does seem rather over the top. However most of the people reading this blog will have forums/blogs with no adult content that are moderated, so it is mostly a box ticking exercise for them.
Will the act actually prevent any harm caused by online content? I’m not convinced.
Section 12(4) seems to require age verification to protect children only against “primary priority content” (and, as mentioned, my understanding is that this is not required if the terms of service ban such content – but then I suppose strong moderation is needed anyway).
“Primary priority content” is not simply content about self-harm, eating disorders or suicide (in addition to porn), but content that specifically “encourages, promotes or provides instructions” for these topics (definition in section 61).
So it looks like, and I am not a lawyer, that this does not include forums providing support and help on those topics.
I am only really considering things from the point of view of people like me: owners of heavily moderated technical forums/blogs. I am sure it gets a lot trickier if your forum/blog addresses issues like self-harm.
Regarding age verification, unless you’re running a service that deals in “harmful content” like, for instance, porn, I believe that the Online Safety Act states that you don’t need any as long as your terms of service explicitly prohibit such harmful content (section 12(5)).
If that’s indeed the case, it is just a box-ticking exercise.
Anyone located outside of the UK can block access to their site from the UK, just so that the lawmakers get some pushback from the community.
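As a purely illustrative sketch of what such a block could look like (this is my own example, not anything from the Act or this post; the `country_of` lookup is a hypothetical stub standing in for a real GeoIP database or a CDN-supplied country header such as Cloudflare’s `CF-IPCountry`), a Python site could wrap its WSGI app in a middleware that answers UK requests with HTTP 451 (“Unavailable For Legal Reasons”):

```python
# Minimal sketch, assuming you can map an IP to an ISO country code.
# country_of is a caller-supplied function: country_of(ip) -> "GB", "US", ...

def make_uk_block_middleware(app, country_of):
    """Wrap a WSGI app so that requests resolving to the UK get HTTP 451."""
    def middleware(environ, start_response):
        ip = environ.get("REMOTE_ADDR", "")
        if country_of(ip) == "GB":
            body = b"451 Unavailable For Legal Reasons (UK Online Safety Act)"
            start_response(
                "451 Unavailable For Legal Reasons",
                [("Content-Type", "text/plain"),
                 ("Content-Length", str(len(body)))],
            )
            return [body]
        # Non-UK traffic passes through to the wrapped application.
        return app(environ, start_response)
    return middleware
```

In practice this would more likely be done at the CDN or web-server level than in application code, but the idea is the same: resolve the visitor’s country and return a legal-block status instead of the site.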
The Wikimedia Foundation should, IMHO, shut down access to Wikipedia in the UK as well.
Until this nonsense is rectified by amending the law to make it practical and useful, a boycott would be very effective, I think. The Wikimedia folks surely know how much traffic they get from all levels of government in the UK. If they shut down their services in the UK, things would start happening real fast.