Protecting children from online harms: Navigating Ofcom's recent consultation
Ofcom has just published for consultation its draft codes and associated guidance for protecting children from harms online, the key objective of the Online Safety Act 2023. Given the volume of material, in this article we provide a brief overview of the proposals and highlight how VerifyMy can support services in scope - of all types and sizes - to achieve compliance with these demanding new rules at minimum cost and time.
The regulator has distilled its proposals into 1,336 pages across 14 documents, giving stakeholders just over nine weeks, until 17 July, to provide feedback. It will consider the responses it receives and may amend the documents before they are published next spring, meeting the legislation's 18-month deadline of 26 April 2025, alongside codes of practice on terrorism, child sexual exploitation and abuse, and illegal content. Ofcom has so far delivered against the timetable set out in its published roadmap, so the indications are that these duties will take effect less than a year from now, as planned. Dramatic changes to the documents as a result of the consultation are unlikely, so all affected digital services need to factor the consequential preparations into their development plans now.
The guidance explains what you must do if you are a digital service likely to be accessed by children. Step 1 is to determine whether or not you are such a service. If you are, step 2 is to complete a children’s risk assessment to identify any risks your service poses to children, drawing on Children’s risk profiles that Ofcom is also supplying.
Step 3 is more explicit. You must in all cases prevent children from encountering the most harmful content, termed “primary priority content” - relating to suicide, self-harm, eating disorders, and pornography. You must also minimise children’s exposure to other serious harms defined as “priority content” - including violent, hateful or abusive material, bullying content, and content promoting dangerous challenges.
You can then consider whether you need to implement up to 40 safety measures which fall into five categories - and VerifyMy can help deliver across the board:
- Robust age checks: Much greater use of age assurance, so services know which of their users are children. All services that do not ban harmful content, and those at higher risk of such content being shared on their service, should implement highly effective age checks.
We offer a wide range of highly accurate age verification and age estimation methods that minimise friction and ensure compliance.
- Safer algorithms: Configured to filter out the most harmful content from children’s feeds and reduce the visibility of other harmful content.
Our global network of content moderation experts, combined with artificial intelligence and machine learning, enables you to review and moderate content (videos, live streams and images), ensuring inappropriate or illegal material is removed and prevented from being shared further - or from being published in the first place.
- Effective moderation: Swift action against content harmful to children, and a 'safe search' setting which children should not be able to turn off.
As well as our highly effective automated moderation tool, our team of human moderators is on hand to handle complex cases, assessing all content flagged during AI content moderation.
- Strong governance and accountability: A named person accountable for compliance, an annual senior-body review, and an employee code of conduct.
VerifyMy’s compliance dashboard provides assurance to all those responsible for delivering a safe online experience for children that the measures are operating effectively, and can alert senior management rapidly to emerging risks.
- More choice and support for children: Accessible information for children and easy-to-use reporting and complaints processes.
Our customer complaints management system allows website users to report content, flagging high-risk material for review by a dedicated content moderation team and ensuring swift resolution.
While there are frequent references to the need to be proportionate, Ofcom has clarified that services cannot decline to take steps to protect children merely because it is too expensive or inconvenient – even the smallest will have to take action as a result of the proposals.
The Technology Secretary, Michelle Donelan, has urged big tech to take the codes seriously. "To platforms, my message is engage with us and prepare," she said. "Do not wait for enforcement and hefty fines - step up to meet your responsibilities and act now." The message is clear: the time to act is now, and collaboration is key.
Robust technologies are available to ensure children have the age-appropriate experiences online that they deserve. VerifyMy is here to help platforms implement these technologies effectively, to ensure compliance and to help safeguard children and society online.