British Technology Companies and Child Safety Officials to Test AI's Ability to Generate Exploitation Images
Tech firms and child protection organizations will be granted permission to evaluate whether artificial intelligence systems can generate child abuse material under recently introduced UK legislation.
Substantial Rise in AI-Generated Illegal Material
The announcement coincided with revelations from a protection watchdog showing that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, the authorities will permit approved AI developers and child safety groups to inspect AI models – the underlying systems for conversational AI and visual AI tools – and verify they have sufficient protective measures to prevent them from producing depictions of child exploitation.
The measures are "fundamentally about stopping exploitation before it occurs," stated Kanishka Narayan, who noted: "Experts, under rigorous protocols, can now detect the risk in AI systems early."
Addressing Legal Obstacles
The amendments have been introduced because it is illegal to produce and own CSAM, meaning that AI creators and other parties cannot create such content as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.
This legislation is designed to avert that problem by enabling the production of such material to be halted at its source.
Legal Structure
The government is introducing the changes as amendments to the criminal justice legislation, which also establishes a ban on possessing, creating or sharing AI systems developed to generate child sexual abuse material.
Real-World Consequences
This week, the minister toured the London base of a children's helpline and heard a simulated call to advisers featuring an account of AI-based abuse. The interaction depicted an adolescent seeking help after facing extortion using an explicit deepfake of himself, created with AI.
"When I hear about young people experiencing extortion online, it is a source of extreme frustration for me and of rightful anger amongst parents," he said.
Concerning Statistics
A prominent online safety foundation stated that instances of AI-generated exploitation content – such as webpages that may contain multiple files – had more than doubled so far this year.
Instances of the most severe material – the most serious form of exploitation – rose from 2,621 visual files to 3,086.
- Female children were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a crucial step to guarantee AI products are safe before they are launched," stated the head of the internet monitoring organization.
"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few clicks, giving offenders the ability to create potentially endless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further commodifies victims' trauma, and renders children, particularly girls, more vulnerable both online and offline."
Support Interaction Data
Childline also released details of support sessions where AI has been mentioned. AI-related harms mentioned in the conversations include:
- Using AI to rate weight, body and looks
- AI assistants discouraging children from consulting safe guardians about abuse
- Being bullied online with AI-generated content
- Online blackmail using AI-manipulated images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.