
April 24, 2025

Today marks a coming-of-age moment for the internet. With the release of Ofcom’s finalised Protection of Children Codes, thousands of online platforms – whether large, small, search-based, or social – now face a legal and moral responsibility to better protect the youngest and most vulnerable members of society.

These codes, shaped by extensive consultation with over 27,000 children and 13,000 parents, are the most ambitious set of child safety measures introduced since the UK Online Safety Act came into force. They set a clear and actionable roadmap for how digital services must assess risks and put safeguards in place to create safer online experiences for children – and they confirm an important step forward: age checks are now a must for the riskiest services.

Age assurance takes centre stage

At the heart of these changes is age assurance – the requirement for platforms to assess whether their services are likely to be accessed by children, and to establish which of their users are children.

Platforms – especially those at medium or high risk of hosting harmful content – must now implement highly effective age checks. Whether the harm is pornography or content promoting suicide, self-harm or eating disorders, the expectation is clear: services must prevent children from accessing harmful material. In many cases, this means configuring algorithms to filter out such content or blocking children’s access to entire areas of a platform altogether.

For services that claim to have a minimum age of use, the days of self-declared ages and tick-box age gates are over. If they are not using strong age checks, they must assume children are using their service and act accordingly to deliver an age-appropriate experience.
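To make that default concrete, here is a minimal sketch – in TypeScript, with hypothetical type and function names that are not drawn from any real platform or from Ofcom’s guidance – of the logic the Codes imply: unless a user has passed a highly effective age check, the service treats them as a child and serves an age-appropriate experience.

```typescript
// Hypothetical types for illustration only.
type AgeAssuranceResult =
  | { method: "highly_effective"; isAdult: boolean } // e.g. photo-ID match or facial age estimation
  | { method: "self_declaration" }                   // tick-box declarations no longer count
  | { method: "none" };

interface Experience {
  adultContentVisible: boolean;
  harmfulContentFiltered: boolean;
}

// Unless the user is a verified adult, default to the child-safe experience.
function resolveExperience(check: AgeAssuranceResult): Experience {
  const verifiedAdult = check.method === "highly_effective" && check.isAdult;
  return verifiedAdult
    ? { adultContentVisible: true, harmfulContentFiltered: false }
    : { adultContentVisible: false, harmfulContentFiltered: true }; // assume children are present
}
```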

This is no longer a “nice to have.” As Ofcom puts it:

“The riskiest services must use highly effective age assurance to identify which users are children – protecting them from harmful material, while preserving adults’ rights to access legal content. That may involve preventing children from accessing the entire site or app, or only some parts or kinds of content. 

If services have minimum age requirements but are not using strong age checks, they must assume younger children are present and make sure they have an age-appropriate experience.”

Key safety measures: What platforms need to know

Some of the key safety-first changes introduced in Ofcom’s new Codes include:

  • Robust age checks: The highest-risk services must deploy highly effective age assurance, preserving both children’s safety and adults’ access to legal content.
  • Safer feeds: Algorithms must be reconfigured to reduce or remove exposure to harmful content in children’s recommendation feeds (a minimal sketch of this filtering follows this list).
  • Swift action: Providers must act quickly when harmful content is flagged, with strong moderation processes in place.
  • More control and support for children: Tools to block users, disable comments, or avoid group chats must be easy to use. Support should also be provided to children who encounter harmful material.
  • Easier reporting and complaints: Terms of service must be child-friendly, and complaints processes must be straightforward and responsive.
  • Strong governance: Every service must name a person responsible for child safety, with risk reviewed at senior levels annually.
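As a rough illustration of the “safer feeds” measure, the sketch below – again in TypeScript, with hypothetical names; real services classify content upstream with far more nuance – filters items in the harmful categories named above out of a child’s candidate feed before ranking.

```typescript
// Hypothetical content model for illustration only.
interface FeedItem {
  id: string;
  harmCategory: "none" | "pornography" | "suicide" | "self_harm" | "eating_disorder";
}

// For users treated as children, exclude items in harmful categories entirely
// rather than merely down-ranking them.
function childSafeFeed(candidates: FeedItem[], treatAsChild: boolean): FeedItem[] {
  if (!treatAsChild) return candidates;
  return candidates.filter((item) => item.harmCategory === "none");
}
```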

The clock is ticking

Platforms have until 24 July 2025 to implement these changes. From that point, Ofcom will begin enforcing compliance. Failure to meet obligations could lead to serious consequences, including hefty fines and, in extreme cases, court orders blocking access to non-compliant services in the UK.

This is not optional – it’s the new baseline

As today’s announcement shows, age assurance is becoming the foundation of online child safety. If a platform doesn’t know the age of its users, it simply cannot offer adequate protection.

“Knowing the age of your users is no longer optional – it is the baseline. Without this, platforms are effectively flying blind and hugely exposed to risk.” – Lina Ghazal, Head of Regulatory & Public Affairs, Verifymy. 

The good news? The technology already exists. Using privacy-preserving methods like email-based age checks, platforms can meet these requirements effectively – without compromising user privacy or experience.
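As an illustration of how lightweight such an integration can be, the sketch below calls a hypothetical email-based age-check endpoint. The URL, payload, and response shape are invented for this example and are not Verifymy’s actual API; a real integration would follow the provider’s documentation.

```typescript
// Hypothetical endpoint and response shape, for illustration only.
async function isVerifiedAdult(email: string): Promise<boolean> {
  const response = await fetch("https://age-check.example.com/v1/email", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.AGE_CHECK_API_KEY ?? ""}`,
    },
    body: JSON.stringify({ email }),
  });
  // The provider returns only an over/under-18 signal – no identity
  // documents or browsing history need to be stored by the platform.
  const result: { isOver18: boolean } = await response.json();
  return result.isOver18;
}
```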

Conclusion: From aspiration to expectation

Ultimately, protecting children is the priority – and in the digital world, you can’t do that without knowing their age. Ofcom’s new Codes of Practice make it clear: age assurance is no longer optional, particularly for the riskiest platforms. It’s the foundation for creating safer, more responsible online experiences.

Determining whether a user is a child is now critical. It allows platforms to target protections where they matter most – ensuring children are shielded from harmful content, while adults retain access to legal material. Age assurance measures will work in tandem with content moderation, service design, and user support, forming a holistic approach to online safety that makes it harder for children to stumble into danger – and limits the spread of harmful content in the spaces they access.

Under Ofcom’s guidance, high-risk platforms must implement highly effective age checks. The technology to do so is already here – ready to deploy, privacy-preserving, and scalable across services of all sizes.

Age assurance should become the norm – the new baseline for building safer online environments. This is more than a regulatory shift – it’s a cultural one. The internet is growing up – and with it, the responsibilities of those who build and run the digital spaces we all use.

At Verifymy, we’re committed to developing solutions that safeguard children and society online. Get in touch to learn how we can support your business and ensure compliance. 

About the author

Lina Ghazal

Lina is Head of Regulatory & Public Affairs at Verifymy, with over 10 years of experience working across media and tech, in both the public and private sectors — including at Ofcom, TF1, and Meta. Lina specialises in building impactful policy initiatives and partnerships, and has worked closely with regulators, industry leaders, and civil society across Europe, the Middle East, Africa, and the US to help shape the future of online safety.

