This week, the Australian eSafety Commissioner registered six new industry-drafted codes, adding to the three introduced earlier in the year. Together, these nine codes mark a significant expansion of online safety regulation in Australia, and an important shift in focus.
Until now, industry codes and standards under the Online Safety Act have primarily dealt with illegal and restricted material – for example, child sexual abuse and exploitation material, pro-terror content, extreme crime and violence, and certain drug-related content.
The new codes go further. They are designed to reduce children’s exposure to harmful and age-inappropriate material which, while not illegal, can damage safety, wellbeing, and development. This includes pornography and sexually explicit content, depictions of crime and violence, references to drug use, and material promoting suicide, self-harm, eating disorders, or sexual violence.
Legally enforceable and backed by penalties of up to $49.5 million, the codes introduce new obligations across platforms and services to apply safeguards, filters, and in some cases, hard age checks.
What the new codes cover
The new requirements apply to:
- App stores
- Video games, messaging, and dating services
- Social media platforms
- Generative AI services and AI companion chatbots
- Websites and hosting services
- Internet service providers
- Search engines
- Equipment manufacturers and suppliers
This wide-ranging approach reflects the reality that children access digital content through many channels, not just high-profile social platforms.
Breaches attract civil penalties. Some obligations take effect in December 2025, and all must be fully operational by September 2026.
Which services will require age checks?
According to eSafety’s guidance, mandatory age checks will be required for “high-risk” services, including:
- Pornography websites and other adult content sites
- Apps and online games rated R18+, including simulated gambling
- Social media features that allow pornography, self-harm content, or high-impact violence
- Generative AI services or chatbots capable of producing sexually explicit, harmful, or violent material without safeguards
- Messaging services linked to R18+ content
For “medium-risk” services, the codes introduce measures such as blurring graphic search results by default or down-ranking suicide-related content in favour of helplines and health information. Low-risk services see little change.
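To make that concrete, here is a minimal sketch of how a search service might tier results for sensitive queries, with helplines surfaced first and flagged content pushed down. The term list, helpline domains, and result fields are illustrative assumptions, not requirements drawn from the codes.

```python
# Illustrative sketch only: the codes do not prescribe an implementation.
# Term lists, domains, and field names below are assumptions.

SELF_HARM_TERMS = {"suicide", "self-harm", "self harm"}
HELPLINE_DOMAINS = {"lifeline.org.au", "beyondblue.org.au"}

def rerank(results: list[dict], query: str) -> list[dict]:
    """Down-rank self-harm content and surface helplines for sensitive queries."""
    if not any(term in query.lower() for term in SELF_HARM_TERMS):
        return results  # non-sensitive query: leave the ranking untouched

    def tier(result: dict) -> int:
        if result["domain"] in HELPLINE_DOMAINS:
            return 0  # helplines and health information first
        if result.get("self_harm_content"):
            return 2  # flagged content last
        return 1      # everything else in between

    # sorted() is stable, so the original order is preserved within each tier.
    return sorted(results, key=tier)
```

A production system would lean on classifier scores rather than keyword matching, but the tiering logic is the same idea.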
How will age assurance work?
The codes do not mandate a single method for age checks. Instead, platforms can choose from a range of acceptable approaches, so long as they meet the definition of “appropriate age assurance” and comply with Australian privacy law.
Examples include:
- Parental confirmation
- Photo ID or credit card checks
- Facial age estimation
- Digital identity wallets
- AI-driven age estimation based on relevant data inputs
- Third-party age assurance vendors
Whatever method is used, providers must minimise the collection of personal information and uphold their obligations under the Privacy Act. Importantly, the government will not have visibility into which sites users visit; the checks are managed by the third-party age assurance provider.
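As a rough illustration of that separation of roles, the sketch below shows a platform delegating a check to a provider and retaining only the pass/fail outcome and the method used. All names here, including the provider.verify call, are assumptions; the codes do not prescribe any particular interface.

```python
# Hypothetical sketch of the separation described above: the provider holds
# the raw evidence (ID document, face scan, data signals); the platform
# stores only the outcome. Interface names are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum, auto

class Method(Enum):
    PARENTAL_CONFIRMATION = auto()
    PHOTO_ID = auto()
    FACIAL_AGE_ESTIMATION = auto()
    DIGITAL_ID_WALLET = auto()
    EMAIL_BASED_ESTIMATION = auto()

@dataclass(frozen=True)
class AssuranceResult:
    meets_threshold: bool  # the only fact the platform needs to retain
    method: Method         # kept for audit and reporting purposes

def check_age(provider, user_token: str, minimum_age: int) -> AssuranceResult:
    """Delegate the check; keep the boolean outcome, never the evidence."""
    outcome = provider.verify(user_token, minimum_age)  # raw data stays with the provider
    return AssuranceResult(meets_threshold=outcome.passed, method=outcome.method)
```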
Principles behind the codes
Beyond technical requirements, the new framework also enshrines rights-based safeguards:
- Proportionality: higher-risk services carry stronger obligations
- Privacy: measures must be designed to avoid the over-collection of personal data
- Freedom of expression: adults must still be able to access lawful content
- Human rights online: protection of children balanced against user autonomy
In practice, this means the industry’s challenge is not only compliance but also trust. Platforms must show users that new safety measures are effective without being intrusive.
The inclusion of innovative age estimation methods
It’s encouraging to see innovative age estimation methods recognised in the codes, such as AI analysis that estimates age from existing data signals. One example is email-based age estimation, which analyses a user’s existing digital footprint: commercially available transactional databases are queried to identify sites and apps where the email address has previously been used, such as a financial institution, mortgage lender, or utility provider.
This approach offers users a low-friction, inclusive alternative to more intrusive checks, such as uploading a passport or driver’s licence. For platforms, it can reduce drop-off rates and protect conversions while ensuring compliance with the new requirements.
Given that most platforms already collect email addresses at account creation, this technique exemplifies the principle of data minimisation – no additional personal data is required to complete an age check.
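To illustrate, here is a minimal sketch of what such a check might look like from the platform’s side, assuming a hypothetical vendor API. The endpoint, request shape, and response fields are invented for this example; real vendors define their own interfaces.

```python
# Hypothetical sketch of email-based age estimation. The endpoint URL,
# payload, and response fields are assumptions, not a real vendor API.

import hashlib
import requests  # third-party HTTP client

VENDOR_URL = "https://age-vendor.example.com/v1/estimate"  # placeholder

def is_adult_by_email(email: str, api_key: str) -> bool:
    """Return True if the vendor estimates the account holder is 18 or over.

    Only a hash of the email already collected at sign-up leaves the
    platform, in keeping with the data-minimisation principle.
    """
    email_hash = hashlib.sha256(email.strip().lower().encode()).hexdigest()
    response = requests.post(
        VENDOR_URL,
        json={"email_sha256": email_hash},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["estimated_age"] >= 18
```

In practice, a low-confidence estimate would typically trigger a fallback to a stronger method, such as a document check or facial age estimation.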
Looking ahead
Australia’s framework is one of the most ambitious globally, and it is likely to shape international conversations on online safety. With obligations phased in from late 2025 through 2026, online platforms and services now face the practical challenge of selecting and implementing solutions that balance effectiveness, compliance, and user trust.
The direction of travel is clear: age assurance is becoming a standard expectation for high-risk services. Alongside the minimum age requirement of 16 for social media platforms, due to take effect in December 2025, the new codes underline just how far Australia is willing to go in setting global benchmarks for online child protection.
The task now is to ensure these measures are applied in ways that protect children, respect rights, and preserve privacy.