UK Tech Firms and Child Protection Agencies to Test AI's Ability to Create Exploitation Images
Under recently introduced UK laws, technology companies and child protection organizations will be permitted to test whether AI tools can produce child exploitation images.
Substantial Increase in AI-Generated Harmful Material
The announcement came as findings from a safety watchdog showed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 cases in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, authorities will allow approved AI companies and child protection organizations to examine AI models – the foundational technology behind conversational AI and image generators – and check that they have sufficient safeguards to prevent them from producing depictions of child exploitation.
"Ultimately about stopping abuse before it happens," stated the minister for AI and online safety, noting: "Specialists, under rigorous conditions, can now detect the danger in AI models promptly."
Addressing Regulatory Challenges
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others could not generate such images as part of a testing regime. Previously, authorities could act only after AI-generated CSAM had been published online.
This legislation is designed to prevent that problem by helping to stop the creation of such images at the source.
Legislative Framework
The authorities are introducing the amendments to the crime and policing bill, which also establishes a ban on possessing, creating or distributing AI systems designed to generate child sexual abuse material.
Practical Impact
Recently, the minister toured the London base of a children's helpline and listened to a mock-up call to advisers featuring an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated deepfake of himself.
"When I learn about young people experiencing extortion online, it is a cause of extreme frustration in me and rightful anger amongst families," he said.
Alarming Data
A prominent internet monitoring organization said that reports of AI-generated abuse material – which can include web pages containing numerous files – had risen sharply so far this year.
Instances of the most extreme material – the most serious category of abuse – increased from 2,621 images or videos to 3,086.
- Girls were the predominant victims, depicted in 94% of illegal AI images in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a vital step to ensure AI products are safe before they are released," stated the chief executive of the internet monitoring organization.
"Artificial intelligence systems have enabled so survivors can be victimised repeatedly with just a few clicks, providing offenders the ability to create potentially limitless quantities of advanced, photorealistic child sexual abuse material," she continued. "Content which further exploits victims' suffering, and makes children, particularly female children, less safe on and off line."
Support Session Data
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in those conversations include:
- Using AI to evaluate weight, body and appearance
- AI assistants dissuading young people from consulting trusted adults about abuse
- Being bullied online with AI-generated material
- Online extortion using AI-faked pictures
Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and related terms were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and emotional wellbeing, including young people turning to AI assistants for support and using AI therapy applications.