British Technology Companies and Child Protection Agencies to Examine AI's Ability to Generate Exploitation Content
Tech firms and child protection organizations will be granted authority to evaluate whether artificial intelligence tools can produce child exploitation images under new British laws.
Substantial Rise in AI-Generated Harmful Material
The announcement came alongside revelations from a safety watchdog that reports of AI-generated child sexual abuse material (CSAM) rose dramatically in the past year, from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will allow designated AI companies and child safety groups to inspect AI models – the underlying technology for conversational AI and visual AI tools – and ensure they have sufficient safeguards to stop them from creating images of child sexual abuse.
The measure is "ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the danger in AI models promptly."
Addressing Regulatory Obstacles
The changes were needed because it is against the law to create and possess CSAM, meaning that AI developers and other parties could not generate such images as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before they could act on it.
This legislation is aimed at averting that problem by helping to halt the creation of those materials at their origin.
Legislative Structure
The changes are being introduced by the government as revisions to the crime and policing bill, which is also implementing a prohibition on owning, producing or sharing AI models developed to generate child sexual abuse material.
Real-World Consequences
Recently, the official toured the London base of a children's helpline and listened to a simulated call to advisers involving a report of AI-based exploitation. The scenario portrayed an adolescent requesting help after being blackmailed with an explicit deepfake of themselves, constructed using AI.
"When I hear about children facing extortion online, it is a cause of intense frustration in me and rightful anger amongst parents," he said.
Alarming Statistics
A leading online safety foundation stated that cases of AI-generated exploitation material – such as webpages that may contain numerous images – had more than doubled so far this year.
Cases of the most severe material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly victimized, making up 94% of illegal AI images in 2025
- Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to ensure AI tools are secure before they are launched," stated the head of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, providing offenders the capability to create potentially endless quantities of advanced, photorealistic child sexual abuse material," she added. "Material which additionally commodifies victims' suffering, and renders young people, especially girls, less safe online and offline."
Support Session Information
Childline has also published details of support interactions in which AI was mentioned. AI-related risks raised in the sessions include:
- Using AI to evaluate weight, physique and appearance
- Chatbots discouraging children from consulting safe adults about abuse
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated images
Between April and September this year, the helpline delivered 367 support interactions in which AI, chatbots and associated topics were discussed, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.