
On September 15th, New York’s Attorney General released proposed rules for the SAFE for Kids Act, a landmark law requiring social media companies to protect under-18 users from algorithmically personalised (or “addictive”) feeds and late-night push notifications.

While the law is driven by evidence that algorithmically curated feeds fuel depression, anxiety and other harms, the heart of the new guidelines is practical, privacy-preserving age assurance that makes those protections real.

Age assurance as the gatekeeper

Before a platform can deliver an algorithmic feed or send notifications between midnight and 6 a.m., it must first know whether the user is an adult. If the user is a minor, parental consent is required before this functionality can be switched on. The New York Attorney General’s draft rules set smart, clear standards for commercially reasonable and technically feasible methods of age assurance, designed to be inclusive, robust and privacy-preserving:

Multiple proven methods

Platforms can choose from several age-check methods – for example, requesting an uploaded image or video, taking a user’s email address or phone number and cross-checking it against other data to verify their age, or using other proven techniques – so long as they’re shown to be effective and privacy-preserving.

Interestingly, the guidance states each company must offer at least one alternative method alongside requesting government-issued ID, so users aren’t forced to present official documents or excluded if they don’t have one.

Privacy by design

Platforms must apply data minimisation and deletion, meaning any information gathered to determine age or obtain parental consent must be used only for that purpose and deleted or de-identified immediately.

Operators must apply industry-standard encryption and security controls, and may adopt zero-knowledge proof (ZKP) or double-blind age checks, which let a user prove their age without sharing personal information.
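
The draft doesn’t prescribe a particular protocol, but a toy sketch (in Python, with every name hypothetical) helps illustrate the idea of proving an age band without sharing personal information. A real deployment would use asymmetric signatures or a genuine zero-knowledge scheme rather than this HMAC shortcut, and would check real evidence rather than a claimed age:

```python
import hashlib
import hmac
import secrets

# Hypothetical trusted age-assurance service (all names are illustrative).
# In a true double-blind setup it would also not learn which platform asked.
class AgeVerifier:
    def __init__(self):
        self._key = secrets.token_bytes(32)  # verifier's secret signing key

    def attest(self, claimed_age: int, nonce: bytes) -> tuple[bool, bytes]:
        # Only a yes/no answer and an integrity tag ever leave the verifier;
        # no name, date of birth or document image reaches the platform.
        is_adult = claimed_age >= 18
        msg = nonce + (b"adult" if is_adult else b"minor")
        return is_adult, hmac.new(self._key, msg, hashlib.sha256).digest()

    def check(self, is_adult: bool, nonce: bytes, tag: bytes) -> bool:
        msg = nonce + (b"adult" if is_adult else b"minor")
        expected = hmac.new(self._key, msg, hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected)

# Platform side: issue a one-time nonce, receive only the attested boolean.
verifier = AgeVerifier()
nonce = secrets.token_bytes(16)
is_adult, tag = verifier.attest(claimed_age=17, nonce=nonce)  # out-of-band
assert verifier.check(is_adult, nonce, tag)  # tag is bound to this nonce
print("adult" if is_adult else "minor: parental consent required")
```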

Accuracy thresholds and safeguards

The draft rules introduce minimum accuracy levels that vary by age group, recognising that it is easier to distinguish younger children from adults than a 17-year-old from someone just over 18. For example, false-positive limits (i.e. the share of minors mistakenly identified as adults) range from 0.1% for ages 0-7 up to 15% for 17-year-olds, and every method must detect at least 98% of attempted circumventions.

At least one method implemented must also meet a “total accuracy minimum,” which counts inconclusive results, ensuring that overall performance remains high.
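
To make these thresholds concrete, here is an illustrative compliance check. The 0.1%, 15% and 98% figures come from the draft as summarised above; the intermediate age bands, the 90% total-accuracy floor and the exact formula are assumptions made for the sake of the sketch:

```python
# Illustrative check of an age-assurance method's test results against
# the draft's thresholds. Intermediate bands and the total-accuracy
# formula are assumptions, not figures from the rule text.
FALSE_POSITIVE_LIMITS = {   # share of minors wrongly passed as adults
    "0-7":   0.001,         # from the draft
    "8-12":  0.02,          # assumed intermediate band
    "13-16": 0.05,          # assumed intermediate band
    "17":    0.15,          # from the draft
}
CIRCUMVENTION_DETECTION_MIN = 0.98  # from the draft
TOTAL_ACCURACY_MIN = 0.90           # assumed "total accuracy minimum"

def total_accuracy(correct: int, incorrect: int, inconclusive: int) -> float:
    # Inconclusive results count in the denominator, lowering the score.
    return correct / (correct + incorrect + inconclusive)

def is_compliant(results: dict) -> bool:
    for band, limit in FALSE_POSITIVE_LIMITS.items():
        if results["false_positive_rate"][band] > limit:
            return False
    if results["circumvention_detection_rate"] < CIRCUMVENTION_DETECTION_MIN:
        return False
    return total_accuracy(*results["outcomes"]) >= TOTAL_ACCURACY_MIN

sample = {
    "false_positive_rate": {"0-7": 0.0005, "8-12": 0.01,
                            "13-16": 0.04, "17": 0.12},
    "circumvention_detection_rate": 0.985,
    "outcomes": (9300, 300, 400),   # correct, incorrect, inconclusive
}
print(is_compliant(sample))  # True: 93% total accuracy, all limits met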

Independent testing and certification

Platforms must choose a high-accuracy method, test it annually, and retain results for at least 10 years to demonstrate compliance.

Additionally, every method must be certified each year by an accredited independent third party (for example, under ISO/IEC 27566 or IEEE 2089.1), with testing that verifies accuracy, false positives/negatives, circumvention detection, and data-deletion and security practices.

Lifecycle updates

Finally, young users who turn 18 must be presented with a simple way to update their age status, so that protections adjust automatically as they become adults.

Built-in parental control

The proposal also details how parental consent will work when minors seek algorithmic feeds or late-night notifications:

  • A minor must approve the request before a parent is contacted, ensuring young people retain agency.
  • Parents and minors can grant or withdraw consent at any time.
  • The platform is not required to show parents the user’s search history or topics of interest in order to obtain parental consent.
  • Refusing consent cannot block general access to the platform’s content or search features.

This design balances parents’ oversight with a child’s right to participate safely online.
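
A minimal sketch of the consent flow these rules imply (the class and state names below are our own, not the rule text’s):

```python
from enum import Enum, auto

class Consent(Enum):
    NONE = auto()            # default: chronological feed, no late-night pushes
    MINOR_APPROVED = auto()  # minor has asked; parent not yet contacted
    GRANTED = auto()         # parent consented; algorithmic feed may be enabled

class MinorAccount:
    def __init__(self):
        self.consent = Consent.NONE

    def minor_requests_feature(self):
        # The minor must approve first; only then is a parent contacted.
        self.consent = Consent.MINOR_APPROVED

    def parent_responds(self, granted: bool):
        if self.consent is Consent.MINOR_APPROVED and granted:
            self.consent = Consent.GRANTED
        else:
            self.consent = Consent.NONE  # refusal: safe defaults stay on

    def withdraw(self):
        # Either the parent or the minor can revoke consent at any time.
        self.consent = Consent.NONE

    def algorithmic_feed_enabled(self) -> bool:
        return self.consent is Consent.GRANTED

    def can_browse_and_search(self) -> bool:
        return True  # general access is never conditioned on consent
```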

Why the “how” matters

It’s encouraging to see such a balanced approach to age assurance built into the SAFE for Kids Act. The rules are inclusive, robust, and privacy-preserving, without being overly restrictive. Rather than relying only on traditional identity documents for age checks, they embrace innovative age estimation and inference (for example, checks based on a user’s email address), while recognising that accuracy is highest for younger children and naturally more nuanced for 17-year-olds close to adulthood.

Importantly, the draft rules note that when a platform already holds the necessary information and user consent, an age-inference check may take place without separately alerting the user at that moment. This enables frictionless age checks: for instance, email-based age checks can run entirely in the background, using information already supplied by a user at sign-up, and meet strict accuracy requirements without cumbersome interruptions to their user experience.

Meanwhile, the law targets the underlying risk. Algorithmic, non-chronological feeds are designed to maximise engagement rather than simply deliver information, surfacing content from outside a child’s network and making it hard to stop scrolling. For under-18s, the SAFE for Kids Act flips that default: no verified age, no algorithmic feed. By requiring age verification up front, it ensures minors default to chronological, age-appropriate feeds unless a parent opts in.
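
In code terms, the default the law mandates might look like the sketch below (function names are illustrative, and the quiet-hours window is the midnight-to-6 a.m. period described above):

```python
from datetime import time

QUIET_START, QUIET_END = time(0, 0), time(6, 0)  # midnight to 6 a.m.

def choose_feed(verified_adult: bool, parental_consent: bool) -> str:
    # No verified age, no algorithmic feed: minors and unverified users
    # default to a chronological feed unless a parent has opted in.
    if verified_adult or parental_consent:
        return "algorithmic"
    return "chronological"

def may_push_notification(verified_adult: bool, parental_consent: bool,
                          now: time) -> bool:
    in_quiet_hours = QUIET_START <= now < QUIET_END
    if not in_quiet_hours:
        return True
    return verified_adult or parental_consent

print(choose_feed(verified_adult=False, parental_consent=False))  # chronological
print(may_push_notification(False, False, time(2, 30)))           # False
```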

By embedding age checks at the front door, the law tackles the problem before harmful patterns can form. It’s a structural fix, not just a warning label.

What comes next

The proposed rules are open for public comment until December 1, 2025. After that, the Attorney General will finalise the rules within a year, and the law will take effect 180 days later. Platforms that fail to comply face civil penalties of up to $5,000 per violation and other enforcement actions.

If your platform needs help preparing for compliance, please get in touch.


About the author

Verifymy

Verifymy is a safety technology provider on a mission to safeguard children and society online.
