British Tech Companies and Child Safety Agencies to Test AI's Ability to Generate Abuse Images
Technology companies and child protection organizations will receive permission to assess whether AI systems can generate child abuse images under recently introduced UK legislation.
Substantial Increase in AI-Generated Harmful Material
The announcement coincided with figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the authorities will permit approved AI companies and child protection groups to examine AI models – the underlying technology behind conversational AI and image generators – and confirm they carry sufficient safeguards to prevent them from creating depictions of child exploitation.
"Fundamentally about stopping exploitation before it happens," declared the minister for AI and online safety, noting: "Experts, under rigorous conditions, can now detect the danger in AI models promptly."
Tackling Regulatory Obstacles
The changes have been introduced because creating and possessing child sexual abuse material (CSAM) is against the law, meaning that AI developers and others cannot generate such content as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.
The law is designed to prevent that problem by allowing the creation of such material to be stopped at source.
Legislative Structure
The authorities are introducing the amendments to the criminal justice legislation, which also establishes a prohibition on possessing, creating or sharing AI systems designed to generate child sexual abuse material.
Real-World Consequences
This week, the minister toured the London base of Childline and listened to a simulated call to advisors involving a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed using a sexualised deepfake of himself, created with AI.
"When I learn about young people experiencing extortion online, it is a cause of extreme frustration in me and justified anger amongst families," he stated.
Alarming Statistics
A prominent online safety organization reported that instances of AI-generated exploitation content – counted as web pages, each of which may contain numerous files – had more than doubled so far this year.
Instances of category A content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
- Depictions of children aged from newborn to two years old increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a vital step to ensure AI tools are secure before they are released," stated the head of the internet monitoring organization.
"Artificial intelligence systems have enabled so survivors can be victimised all over again with just a few clicks, providing criminals the capability to create possibly limitless quantities of advanced, photorealistic child sexual abuse material," she continued. "Material which further commodifies survivors' suffering, and renders children, especially girls, less safe both online and offline."
Counseling Session Information
The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Using AI to rate weight, body and appearance
- Chatbots discouraging children from talking to trusted guardians about harm
- Facing harassment online with AI-generated content
- Online blackmail using AI-manipulated images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated topics were discussed, significantly more than in the equivalent period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.