British Tech Firms and Child Safety Agencies to Test AI's Capability to Create Exploitation Images
Tech firms and child safety organizations will be granted authority to evaluate whether artificial intelligence systems can produce child exploitation material under new British laws.
Substantial Rise in AI-Generated Illegal Material
The announcement coincided with revelations from a safety watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the changes, authorities will permit designated AI developers and child safety organizations to inspect AI models – the foundational technology behind chatbots and image-generation tools – and verify that they have adequate safeguards to stop them producing depictions of child exploitation.
"Ultimately about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the risk in AI models promptly."
Tackling Legal Challenges
The changes address a legal obstacle: because it is illegal to produce and possess child sexual abuse material (CSAM), AI developers and other parties could not generate such content as part of a testing regime. Previously, authorities could not act until AI-generated CSAM had been uploaded online.
This legislation aims to avert that problem by making it possible to halt the production of such material at its source.
Legal Framework
The authorities are introducing the amendments to criminal justice legislation, which also establishes a prohibition on possessing, producing or distributing AI systems designed to generate child sexual abuse material.
Practical Consequences
This week, the minister toured the London headquarters of Childline and listened to a mock-up of a call to advisors involving a report of AI-based exploitation. The scenario depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about young people experiencing blackmail online, it is a cause of extreme anger in me and justified concern amongst parents," he said.
Concerning Statistics
A leading online safety foundation stated that instances of AI-generated exploitation material – where a single webpage may contain multiple files – had more than doubled so far this year.
Instances of category A material – the most serious form of abuse – increased from 2,621 to 3,086 images or videos.
- Female children were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a vital step to ensure AI tools are safe before they are launched," stated the head of the internet monitoring foundation.
"Artificial intelligence systems have enabled so victims can be victimised repeatedly with just a few clicks, providing offenders the ability to make possibly limitless amounts of sophisticated, lifelike exploitative content," she added. "Content which additionally commodifies victims' trauma, and renders young people, particularly female children, less safe on and off line."
Counselling Session Data
The children's helpline also released details of support interactions in which AI was mentioned. AI-related risks raised in the conversations include:
- Using AI to evaluate body size and appearance
- AI chatbots dissuading children from talking to trusted adults about harm
- Facing harassment online with AI-generated content
- Online extortion using AI-manipulated images
Between April and September this year, Childline conducted 367 support interactions in which AI, chatbots and related topics were mentioned – significantly more than in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.