Identity verification & content moderation
Quality control & content review
Ensuring accuracy through human oversight
A team of human moderators reviews all content flagged by our AI content moderation tool. In addition, as part of our ongoing quality control process, the team moderates a minimum of 1% of all content not flagged by AI.
Versatile Content Moderation
Our content moderation solution works across a wide range of media types, ensuring maximum compliance and user safety regardless of the platform.
Video
Live streaming
Images
How it works
All content flagged by AI moderation is sent for human moderation to confirm the flag is accurate. Every decision is logged to aid in training and refining the AI model.
Human moderators also review a sample of approved content (a minimum of 1%) for quality control, confirming that automated decisions are accurate.
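The routing logic above can be sketched in a few lines. This is a minimal illustration, not Verifymy's implementation: the function name, queue structure, and 1% sampling mechanism are all assumptions made for the example.

```python
import random
from dataclasses import dataclass, field

SAMPLE_RATE = 0.01  # assumed: minimum 1% of unflagged content is spot-checked


@dataclass
class ReviewQueue:
    """Hypothetical human-review queue; each entry doubles as a logged decision."""
    items: list = field(default_factory=list)

    def submit(self, content_id: str, reason: str) -> None:
        self.items.append((content_id, reason))


def route_content(content_id: str, ai_flagged: bool,
                  queue: ReviewQueue, rng: random.Random) -> str:
    """Route one piece of content per the workflow described above:
    all AI-flagged items go to human review; unflagged items are
    randomly sampled at the quality-control rate."""
    if ai_flagged:
        # Human moderator confirms the AI flag before any action is taken.
        queue.submit(content_id, "ai_flag")
        return "pending_human_review"
    if rng.random() < SAMPLE_RATE:
        # Approved content sampled for quality control.
        queue.submit(content_id, "qc_sample")
        return "approved_pending_qc"
    return "approved"
```

In this sketch the queue itself serves as the decision log; a production system would persist those records for auditing and for retraining the AI model.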
Platform health, safety and integrity
Platform health
User safety
Platform integrity
Content health assurance
By manually reviewing a minimum of 1% of all unflagged content, we ensure the correct moderation decisions are being made and the health of your content is maintained.
Our principles around data
Data retention
Data minimisation
Data integrity
GDPR