Digital Safeguarding


3 min read
November 16, 2021

Policy drawn up in response to a moral panic is rarely well thought through. The fever-pitch debate about how to address online trolling and abuse is perhaps as noisy as it has ever been. The seeds were sown during the Euro 2020 final, when England’s players suffered racist abuse on social media after missing penalties in the shootout. The row reached a climax in Parliament following the loss of one of its own, Sir David Amess MP, a death which has been linked, it has to be said without a great deal of evidence, to the “cesspit” of our online discourse on social media.

Prove it

The immediate call is for all social media users to prove their identity to the platform before making public comments. It is hoped that this will stop the abuse, but at what cost? Opponents rightly point to the benefits of anonymity for certain marginalised or vulnerable groups: LGBT+ people, young adults in cultures that disapprove of certain behaviour, such as drinking alcohol, or those who would benefit from good-quality information on the risks of illegal drugs but would not seek it out if they thought they might be identified.

There is clearly growing momentum behind demands that everyone prove who they are before doing anything online. Fifty Conservative Members of Parliament wrote to the Secretary of State demanding this, a number that would more than overturn the government majority if opposition parties supported such a move.

Clean up the internet

However, a more likely outcome is a compromise whereby users can voluntarily identify themselves, earning a tick against their username, without necessarily being required to share their real name in day-to-day interactions. Others could then choose not to engage with unverified users, a move it is hoped would nudge people towards more civil discussion. This approach has been proposed by a campaign called “Clean up the Internet” and is attracting cross-party support, including, unexpectedly, some notable libertarians, who perhaps fear a more extreme policy.

As with any such reform, there is a risk of unintended consequences. Systematically disclosing your identity could extend platforms’ ability to track your online activities, perhaps only for targeted advertising, but perhaps, more sinisterly, allowing totalitarian states to target dissidents. Online identity verification therefore demands a delicate balance between safety and privacy.

One potential solution that avoids this pitfall is tokenised identity. An independent, regulated third-party provider verifies that a user is a real person and, where necessary, their age, but passes no personally identifiable information (PII) to the platforms the user accesses; it simply confirms the claim. While this is still open to abuse, it can be regulated. Indeed, the government is already working on a Digital Identity and Attributes Trust Framework that would license identity providers and could be adapted for this purpose.
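The tokenised flow described above can be sketched in a few lines. This is a minimal illustration, not any provider's actual protocol: the shared `PROVIDER_KEY`, the function names, and the claim fields are all assumptions for the sketch (a real deployment would use asymmetric signatures and a standard token format). The point it demonstrates is that the platform receives only signed, non-PII claims.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical signing key held by the regulated identity provider.
# In practice this would be an asymmetric key pair, with the platform
# holding only the public half.
PROVIDER_KEY = b"demo-signing-key"

def issue_attestation(real_person: bool, age_over_18: bool) -> str:
    """Provider signs a token carrying only non-PII claims:
    no name, no date of birth, no document numbers."""
    claims = {"real_person": real_person, "age_over_18": age_over_18}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_attestation(token: str):
    """Platform checks the signature and reads the claims; it learns
    only that a trusted provider vouched for the user."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token is rejected
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_attestation(real_person=True, age_over_18=True)
print(verify_attestation(token))  # {'real_person': True, 'age_over_18': True}
```

A tampered token fails the `compare_digest` check and is rejected, so the platform can trust the claims without ever handling the underlying identity documents.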

The future of online identity verification

It is not just online abuse driving the need to prove who you are before you go online. European and domestic legislation increasingly demands that websites put additional protections in place for children and young people, whether to prevent access to age-restricted goods, content and services, or to ensure that personal data is not processed unlawfully when a child is too young to give permission without parental consent. GDPR creates the concept of a ‘digital age of consent.’ The Audio-Visual Media Services Directive requires video-sharing platforms to prevent children from accessing adult content. More generally, minors cannot enter into binding contracts, so merely accepting a platform’s terms and conditions may not be reliable unless the platform has confirmed the user is an adult.

In the UK, we will soon feel the impact of the Age-Appropriate Design Code, which recently came into force after a 12-month grace period. The code places an obligation on platforms not to show inappropriate material to younger children, creating a need to know not only if a user is an adult but also their approximate age if they are a child.
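Knowing a child user's approximate age, as the code requires, typically means mapping a verified age to a band and gating content accordingly. The sketch below is illustrative only: the band boundaries and names are assumptions loosely inspired by the developmental stages the regulator discusses, not the code's official wording.

```python
# Illustrative age bands; boundaries and labels are assumptions
# for this sketch, not the regulator's definitions.
AGE_BANDS = [
    (0, 5, "pre-literate"),
    (6, 9, "core primary"),
    (10, 12, "transition"),
    (13, 15, "early teens"),
    (16, 17, "approaching adulthood"),
]

def age_band(age: int) -> str:
    """Reduce an exact age to a coarse band, so the platform need
    not store the precise age at all."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age >= 18:
        return "adult"
    for low, high, name in AGE_BANDS:
        if low <= age <= high:
            return name

def may_show(content_min_age: int, user_age: int) -> bool:
    """Gate content carrying a minimum-age rating against the
    user's verified age."""
    return user_age >= content_min_age

print(age_band(11))      # transition
print(may_show(13, 11))  # False
```

Working in bands rather than exact ages is itself a data-minimisation choice: the platform learns enough to meet its obligations and nothing more.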

A tipping point?

As you will now have concluded, we are fast approaching, or may well have already reached, a tipping point at which it is necessary to prove at least your age, and probably your identity, to an independent, regulated third party before you can do anything more than listen to a nursery rhyme online. For some, this is an outrageous affront to the right to complete freedom on the internet; for others, quite probably the vast but silent majority, it merely applies the real-world norms that protect children’s physical and mental well-being in an online context. Whatever your view, be prepared for change, whether as a consumer going online or as a service provider keen to minimise any friction this will create for your customers.

As we consider the future of online identity verification, we must recognise its implications for privacy and data protection in our increasingly digital lives, and strike a balance between safety and the freedoms users cherish online.

About the author

Verifymy

Verifymy is a safety technology provider on a mission to safeguard children and society online.

