Online Safety Act


September 20, 2023

The Online Safety Act has completed its passage through the UK Parliament and awaits only the King’s assent – a formality – before it becomes law.

This will trigger an 18-month deadline for the newly appointed internet regulator, Ofcom, to create a series of codes of conduct, which the Secretary of State will then present to Parliament. As each code is approved a few weeks later, the elements of the new regime it covers come into force.

The strictest requirements apply to content that promotes terrorism and child sexual abuse material. While such content has long been illegal, the Act provides much stronger enforcement mechanisms, particularly against sites hosted abroad: Ofcom will be able to demand that internet service providers block access to sites that break the new law.

Perhaps even more effective is the regulator’s power to require that essential support services, such as hosting, search, payments and advertising, be withdrawn from non-compliant sites.

“Highly effective” age assurance

The highest-profile change is the requirement for “highly effective” age assurance, of the sort offered by Verifymy’s age assurance solution, to prevent children from being exposed to the “primary priority” harms defined by the Act.

This list will be curated by Parliament and begins with four categories: suicide, self-harm, dangerous dieting advice and pornography. The Act also requires platforms to give all UK adults the option to avoid seeing this and other “legal but harmful” content, such as bullying. Fraudulent ads and scams are also in scope where they appear on the most widely used social media sites.

These measures require services to spot many forms of harmful content on their platforms in order to remove it altogether, prevent children from seeing it, or offer the new opt-out to adults. Verifymy’s content moderation solution provides this business-critical moderation through both automated scanning and human moderation. The risk is particularly acute when users create their own content, in live streaming and live chat, for example, where real-time monitoring is necessary.

Online Safety Act risk assessment

All online services likely to be accessed by children must complete a risk assessment to understand which of the listed harms, and indeed anything else found on their site, could be harmful to children. The largest social media services must publish these assessments.

The Minister in charge of the Act’s final Parliamentary stages, Lord Parkinson, stated clearly that 18 months was “a backstop, not a target”, so the new law could be in force before the next UK General Election, widely expected in October 2024, allowing the Conservative government to take credit for it.

These are some of the most challenging online safety regulations in the world today, and it will take services time to select, procure and implement the technology needed to comply.

There will be very strong pressure from the lobby groups that drove this legislation for Ofcom to enforce it quickly. These measures are not a surprise, and the expectation set by Ministers is that websites will have been preparing for some time already, so it would be a mistake to expect an extended grace period before heavy fines and the blocking of sites and business services are imposed.

Our advice to all user-to-user services (the scope of the law, plus adult content sites) is to begin planning how to comply immediately, to document those plans clearly, and to be as transparent about them as possible without undermining their operational effectiveness. Plans should describe the actions that will now be taken to bring a service into full compliance in good time before Ofcom begins its enforcement activity.

About the author

Verifymy

Verifymy is a safety technology provider on a mission to safeguard children and society online.
