UK Passes Online Safety Act
The Online Safety Act has completed its passage through the UK Parliament and awaits only Royal Assent, a formality, before it becomes law.
This will trigger an 18-month deadline for the newly appointed online safety regulator, Ofcom, to produce a series of codes of practice, which the Secretary of State will then lay before Parliament. Once each code is approved, typically a few weeks later, the elements of the new regime it covers come into force.
The strictest requirements apply to content that promotes terrorism and to child sexual abuse material (CSAM). While both have long been illegal, the Act provides much stronger enforcement mechanisms, particularly for sites hosted abroad, as Ofcom will be able to require Internet Service Providers to block access to sites that break the new law.
Perhaps even more effective is the regulator’s power to require that essential support services, such as hosting, search, payments and advertising, be withdrawn from non-compliant sites.
The highest-profile change is the requirement for “highly effective” age assurance, of the sort offered by VerifyMy’s age assurance solution, VerifyMyAge, to prevent children from being exposed to “primary priority” harmful content.
This list will be curated by Parliament and begins with four categories: suicide content, self-harm content, dangerous dieting advice and pornography. The new Act also requires platforms to give all UK adults the option to avoid seeing such harmful content and other “legal but harmful” content, such as bullying. Fraudulent ads and scams are also in scope where they appear on the most widely used social media sites.
These measures require services to detect many forms of harmful content on their platforms so they can either remove it altogether or prevent children from seeing it and offer the new opt-out to adults. VerifyMy's content moderation solution, VerifyMyContent, provides this business-critical capability through a combination of automated scanning and human review. The risk is particularly acute where users create their own content, in live streaming and live chat, for example, where real-time monitoring is necessary.
All online services likely to be accessed by children must complete a risk assessment to understand which of the listed harms are present on their site, and whether anything else found there could be harmful to children. The largest social media services must publish these documents.
The Minister in charge of the Act's final Parliamentary stages, Lord Parkinson, stated clearly that 18 months was “a backstop not a target”, so the new law could be in force before the next UK General Election, widely expected in October 2024, allowing the Conservative government to take credit for it.
These are some of the most challenging regulations in the world today, and it will take services time to select, procure and implement the technology needed to comply.
The lobby groups that drove this new legislation will put strong pressure on Ofcom to enforce the law quickly. These new measures are not a surprise, and the expectation set by Ministers is that websites will have been preparing for some time already, so it would be a mistake to expect an extended grace period before heavy fines and the blocking of sites and business services are imposed.
Our advice to all user-to-user services, which form the core scope of the law, and to adult content sites, is to begin planning how to comply immediately, to ensure those plans are clearly documented, and to be as transparent about them as possible without undermining their operational effectiveness. Plans should set out the actions that will now be taken to bring a service into full compliance in good time before Ofcom begins its enforcement activity.
See VerifyMyAge or VerifyMyContent for more information on our Online Safety Act compliance solutions.