December 9, 2025

Australia today becomes the first country in the world to introduce an outright ban on under-16s holding social media accounts, a move that has captured global attention. Governments from Europe to North America are already signalling that they are watching closely – not only to see whether the policy reduces online harms, but also to understand the practical reality of enforcing such a sweeping change at population scale. The stakes are high: the ban is expected to affect an estimated 2.8 million young people, fundamentally reshaping how children in Australia connect, communicate and participate online.

How will the ban work in practice?

Under the new law, children under 16 cannot open or hold social media accounts. Australia's online safety regulator, the eSafety Commissioner, has identified an initial list of ten major platforms that must comply, including TikTok, Instagram, Facebook, YouTube and Snapchat. Although some platforms may still allow limited access without an account, under-16s will no longer be able to maintain profiles, form connections or receive personalised recommendations – the core functions that define most social platforms today.

Several platforms have already begun suspending accounts they suspect belong to under-16s ahead of the deadline. Others are likely preparing to lean heavily on their own inference models (internal signals already used to estimate user age) combined with third-party age check technology providers. In practice, this means many young people will soon be prompted to complete an age check if a platform believes they may be under the threshold. For legitimate users over 16, this will act as an “appeal” mechanism to restore access, as sketched below.
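As a concrete illustration, here is a minimal sketch of that flow: an internal inference score triggers a third-party check, which doubles as the appeal route for users flagged in error. The names, threshold and callables are hypothetical, standing in for whatever signals and providers a platform actually uses:

```python
UNDER_16_THRESHOLD = 0.7  # hypothetical inference score that triggers a check

def gate_account(user_id: str, infer_p_under_16, run_age_check) -> str:
    """Decide whether an account keeps access under the ban.

    infer_p_under_16: callable(user_id) -> float, the platform's internal
        estimate of the probability the user is under 16.
    run_age_check: callable(user_id, minimum_age) -> bool, a third-party
        age check returning True when the user passes.
    """
    if infer_p_under_16(user_id) < UNDER_16_THRESHOLD:
        return "access_retained"      # no strong signal the user is under 16

    # Flagged users are prompted to complete an age check; for genuine
    # over-16s this doubles as the "appeal" route described above.
    if run_age_check(user_id, minimum_age=16):
        return "access_restored"
    return "account_suspended"        # likely under 16: account must close

# Example: a user the model flags who then passes the third-party check.
print(gate_account("u123", lambda uid: 0.9, lambda uid, minimum_age: True))
```

Running the third-party check only for flagged accounts keeps friction low for the majority of users, which matters at the scale of millions of accounts.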

As an age check technology provider, we see this moment as both transformative and challenging, but ultimately an opportunity to build a healthier online ecosystem for young people.

The rationale behind the ban: Protecting young people

Supporters of the new rules point to mounting concerns about mental health harms, algorithmic exposure, and unsafe interactions. Delaying access to social media aims to give children more time before they encounter adult content, addictive design patterns, or the pressures of online comparison.

They also argue that younger teens are especially vulnerable to targeted advertising, algorithmic content loops and engagement-driven features designed for adults. Reducing early exposure may help curb cyberbullying and social isolation, giving children more time to develop resilience before entering highly interactive online spaces. The ban also gives parents and carers greater oversight of what their children encounter online, providing added protection during a critical stage of development.

As Australia takes this unprecedented step, other governments will no doubt be watching the results.

Concerns and unintended consequences

However, the debate is far from one-sided. Some worry that a child who slips through the net may then be treated as an adult by platform algorithms, exposing them to more mature content rather than less. Others fear younger teens will gravitate towards lesser-known, less-regulated platforms, potentially increasing their exposure to harm.

There is also the social impact to consider: many teenagers rely on social media for communication, creativity and community, and losing access overnight may leave some feeling isolated – particularly if half a classroom retains access and half does not. Additionally, there are concerns about fairness and inclusion, as young people in rural or lower-income communities may lack access to traditional forms of identification used in some verification processes.

These trade-offs highlight why regulating social media – where benefits and harms are closely intertwined – is more complex than regulating higher-risk online domains such as gambling or adult content, where blanket age gates are less contested.

How technology will enable the ban

Ultimately, the success of the ban will depend on how seamlessly platforms can incorporate age checks. The good news is that this technology already exists, and it’s low-friction, privacy-preserving and inclusive. 

Importantly, biometric data is not required to carry out successful age checks. Innovative methods, such as Email-Based Age Estimation, can determine whether a user is highly likely to be 16 or over by analysing the services an email address has interacted or transacted with over time. These checks are fast, highly scalable and designed to preserve privacy – they do not identify a person, but simply estimate age with a high degree of confidence. This technique, invented by Verifymy, is now recognised by Ofcom in the UK as a highly effective form of age assurance under the Online Safety Act.
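To make the intuition concrete, a toy version of this kind of estimate might accumulate age signals from the services an address has used. This is purely illustrative: the signal names, weights and threshold below are invented, and real systems rely on proprietary data and models rather than a simple lookup table:

```python
# Invented signal weights for illustration only.
SIGNALS = {
    "bank_account_linked": 10,       # financial services usually require 18+
    "utility_billing_history": 8,
    "long_lived_mailbox": 5,         # address active for many years
    "ecommerce_purchase_history": 4,
}

def likely_16_plus(email_signals: set[str], threshold: int = 10) -> bool:
    """Return True if accumulated signals suggest the holder is 16+.

    Note: this estimates an age band from behavioural signals; it never
    identifies the person behind the address.
    """
    score = sum(SIGNALS.get(s, 0) for s in email_signals)
    return score >= threshold

print(likely_16_plus({"bank_account_linked", "long_lived_mailbox"}))  # True
print(likely_16_plus({"ecommerce_purchase_history"}))                 # False
```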

Other methods, such as Facial Age Estimation, may also be used. These tools include liveness detection to prevent spoofing with photos or masks. Plus, traditional ID scanning will remain available for users who prefer this approach, with face match technology deployed alongside document authentication checks to ensure that passports or driving licences are genuine and untampered with.
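A hedged sketch of how these two routes might be orchestrated is below, with stub functions standing in for the real computer-vision and document-forensics components; every function name here is a placeholder rather than any vendor's actual API:

```python
# Stubs standing in for trained models and forensic checks.
def passes_liveness_check(selfie) -> bool: return True    # rejects photos/masks
def estimate_age_from_face(selfie) -> int: return 19
def document_is_authentic(doc_image) -> bool: return True # tamper/forgery check
def faces_match(doc_image, selfie) -> bool: return True   # holder matches document
def age_from_document(doc_image) -> int: return 15

def verify_age_facial(selfie, minimum_age: int = 16) -> bool:
    """Facial Age Estimation route: liveness first, then the age estimate."""
    if not passes_liveness_check(selfie):
        return False
    return estimate_age_from_face(selfie) >= minimum_age

def verify_age_document(doc_image, selfie, minimum_age: int = 16) -> bool:
    """ID route: authenticate the document, match the face, read the age."""
    if not document_is_authentic(doc_image):
        return False
    if not faces_match(doc_image, selfie):
        return False
    return age_from_document(doc_image) >= minimum_age

print(verify_age_facial("selfie.jpg"))                    # True: estimated 19
print(verify_age_document("passport.jpg", "selfie.jpg"))  # False: holder is 15
```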

There will inevitably be challenges with some borderline cases, particularly for teens who have just turned 16, but platforms will support appeal processes so users can demonstrate their age through alternative routes. Modern age assurance systems therefore offer a practical, inclusive and privacy-respecting way for platforms to comply with the new rules.

Age-appropriate experiences, not just bans

While the legislation focuses on restricting account access, it could be argued that the wider conversation should be about enabling age-appropriate online experiences, not simply removing young people from platforms altogether.

Age checks can play a positive role by restricting under-16s from accessing high-risk features such as direct messaging, live streaming or mature content; by setting age-sensitive defaults for privacy and safety; and by empowering parents with optional oversight tools. Education, digital literacy, family conversations and better platform-level safety design also have critical roles to play.
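One way to picture this is as an age-banded policy table applied once a user's age band is known. The bands, feature flags and defaults below are invented for illustration and are not drawn from the legislation or any platform's real settings:

```python
# Hypothetical age-banded defaults; flags and bands are illustrative only.
POLICY = {
    "under_16": {
        "direct_messaging": False,
        "live_streaming": False,
        "mature_content": False,
        "profile_default": "private",
        "parental_oversight": True,
    },
    "16_and_over": {
        "direct_messaging": True,
        "live_streaming": True,
        "mature_content": False,      # still behind a separate 18+ check
        "profile_default": "private",
        "parental_oversight": False,  # optional rather than enforced
    },
}

def settings_for(age: int) -> dict:
    """Return the default feature set for a user's age band."""
    return POLICY["under_16"] if age < 16 else POLICY["16_and_over"]

print(settings_for(15)["direct_messaging"])  # False
print(settings_for(17)["live_streaming"])    # True
```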

Social media offers meaningful benefits – connection, creativity, community and learning – and millions of families rely on it for support. The goal should be to preserve these upsides while reducing the harms.

Looking ahead: A collaborative path forward

There is no silver bullet for online safety. As Australia’s rollout unfolds, a collaborative effort will be essential to ensure the laws protect young people without unfairly limiting their opportunities to participate online.

Success will depend on cooperation between families and caregivers, children and young people themselves, schools and educators, regulators and policymakers, platforms and technology providers, and child-safety organisations.

Australia may be the first nation to implement such sweeping rules, but it will likely not be the last. This moment is an opportunity to build a model that balances protection, empowerment and fairness – and one that uses technology not as a barrier, but as a bridge to safer, more age-appropriate online experiences.

About the author

Verifymy

Verifymy is a safety technology provider on a mission to safeguard children and society online.
