UK Technology Firms and Child Safety Agencies to Examine AI's Ability to Generate Exploitation Content
Technology companies and child safety agencies will be permitted to test whether AI systems can generate child sexual abuse material under recently introduced British legislation.
Significant Increase in AI-Generated Illegal Content
The announcement came as a safety monitoring body published findings showing that reports of AI-generated CSAM have risen sharply in the past twelve months, from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the authorities will allow designated AI developers and child protection groups to inspect AI models – the foundational technology behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from creating depictions of child sexual abuse.
"Fundamentally about preventing exploitation before it happens," declared Kanishka Narayan, adding: "Experts, under strict conditions, can now detect the risk in AI models promptly."
Tackling Regulatory Challenges
The amendments have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and other parties cannot generate such images even as part of a testing process. Previously, officials had to wait until AI-generated CSAM appeared online before acting on it.
The new law is designed to prevent that problem by making it possible to halt the creation of such material at source.
Legislative Framework
The changes are being added by the government as revisions to the crime and policing bill, which is also establishing a ban on owning, creating or sharing AI models developed to create child sexual abuse material.
Practical Consequences
This week, the minister toured Childline's London base and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The roleplay portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about young people experiencing extortion online, it is a cause of extreme frustration in me and justified concern amongst families," he said.
Alarming Statistics
A leading online safety organization stated that reports of AI-generated abuse content – each of which can be a webpage containing numerous files – had more than doubled so far this year.
Instances of the most severe category of content – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Reaction
The law change could "represent a vital step to ensure AI products are safe before they are released," stated the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have enabled so survivors can be targeted all over again with just a few clicks, giving criminals the capability to make possibly limitless amounts of sophisticated, lifelike exploitative content," she added. "Content which further commodifies victims' trauma, and renders children, particularly female children, less safe both online and offline."
Support Interaction Data
Childline also released data from counselling sessions in which AI was mentioned. AI-related harms discussed in the conversations include:
- Employing AI to rate weight, physique and looks
- Chatbots discouraging children from talking to trusted guardians about abuse
- Facing harassment online with AI-generated content
- Online extortion using AI-manipulated images
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related terms were mentioned – significantly more than in the equivalent period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and of AI therapy apps.