Online Safety


4 min read
February 10, 2026

Recent headlines involving X, its AI tools, and the proliferation of non-consensual synthetic imagery have brought a long-standing issue back into sharp focus: how do we protect people – and particularly children and those disproportionately targeted, including women and girls – from image-based abuse in an age of generative AI?

While non-consensual intimate image (NCII) abuse is not new, the emergence of widely accessible generative AI has fundamentally altered the scale, speed and accessibility of harm. What once required access to private images, technical capability, and deliberate distribution can now be created and shared in moments – often anonymously, and at viral scale. Too often, these tools are deployed without the safeguards needed to anticipate and prevent foreseeable misuse, leaving platforms to respond only after harm has already occurred.

This moment is not just about one platform or one tool. It is a signal that the systems underpinning the internet must evolve, and that AI innovation must be matched with robust guardrails from the outset. Safeguarding children and society online requires more than reactive takedowns or statements of intent. It requires prevention by design and a holistic approach that brings together governments, regulators, educators, technology providers and platforms around a shared duty of care.

Understanding NCII and AI-generated image abuse

NCII abuse – sometimes referred to as image-based sexual abuse – involves the creation or distribution of intimate images or videos of a person without their consent. This can include real images, manipulated images, or fully synthetic content designed to appear realistic, often referred to as “deepfake” imagery when AI is used to fabricate or alter a person’s likeness.

Crucially, harm does not depend on whether an image is “real”. When a person’s likeness or identity is used without permission, consent has been violated. For victims – particularly children and women who are disproportionately targeted by image-based abuse – the consequences can be severe and long-lasting, ranging from emotional distress and reputational damage to harassment, coercion and fear for personal safety.

The vast majority of deepfake videos online are pornographic, and most of that content depicts women and girls. This highlights how image-based abuse in the age of generative AI overwhelmingly targets female subjects, and underscores that NCII is a form of technology-facilitated gender-based violence as well as a child safety issue.

Generative AI has intensified this risk. Acting as both a creation and a distribution engine, it enables non-consensual imagery to be fabricated instantly and amplified algorithmically at viral speed. This collapses the gap between creation and circulation, turning NCII into a scalable form of harassment and intimidation.

Safeguarding children and those disproportionately targeted online is a shared responsibility

There is no silver bullet for safeguarding online in the age of generative AI. Protecting children – and those disproportionately affected by image-based abuse – from NCII and other AI-enabled harms requires a holistic approach built on shared responsibility across the entire digital ecosystem.

Platforms must design products with safety built in from the outset, including guardrails around AI tools, proactive content moderation, and meaningful consent mechanisms. Technology providers play a critical role by enabling privacy-preserving age assurance, identity verification, content moderation and consent management at scale. Governments and regulators set the baseline expectations, increasingly moving from high-level principles to requirements for proactive detection and demonstrable real-world outcomes. 

At the same time, educators, parents and civil society are essential in building digital literacy and reinforcing that consent, privacy and respect apply online just as they do offline. Investors and leadership teams also shape priorities, influencing whether child protection and user safety are embedded into product strategy from the outset or addressed only in response to crisis.

The most effective approaches recognise that safeguarding children and protecting those disproportionately targeted by image-based abuse online is a shared duty of care, and that prevention must be systemic, not siloed.

Prevention by design and the role of online safety technology

The most effective way to prevent AI-generated non-consensual intimate imagery is to build robust safety guardrails directly into AI systems themselves, stopping harmful content from being created in the first place. When generative tools are designed with clear boundaries, technical safeguards and abuse prevention mechanisms from the outset, the risk of large-scale harm can be significantly reduced.

However, this is not always the reality – and even well-designed systems cannot eliminate risk entirely. In these cases, online safety technology plays a critical complementary role in protecting children and wider society. And crucially, the technology needed to prevent NCII abuse already exists. Technology providers enable platforms to move closer to prevention by design through a combination of privacy-preserving age assurance, identity verification, content moderation and consent management solutions. 

Used together, these technologies can help establish accountability at the point of content creation, ensure that all individuals appearing in content have actively consented, detect and prevent harmful or illegal material before it spreads, and provide clear, effective pathways for reporting and redress when concerns are raised. When embedded holistically across the content lifecycle – before, during and after upload – these safeguards reduce the burden on victims, limit the reach of harmful content, and create safer online environments for children and wider society.

Why holistic safety matters

Identity verification without consent is insufficient. Content moderation without prevention is reactive. Reporting without accountability places the burden on victims.

Safeguarding children and society online requires joined-up systems that reinforce one another – combining regulation, education and technology to prevent harm before it occurs, limit its spread when it does, and ensure accountability throughout.

As AI continues to evolve, so too must the infrastructure that governs its use. The platforms and ecosystems that earn trust in the years ahead will be those that embed safety, child protection and safeguards against image-based abuse into their architecture from the outset, recognising that innovation and safeguarding are not opposing forces, but inseparable responsibilities.

👉 Get in touch to explore how online safety tech can help create safer online environments.


About the author

Verifymy

Verifymy is a safety technology provider on a mission to safeguard children and society online.
