British Technology Companies and Child Protection Agencies to Test AI's Ability to Create Exploitation Images

Tech firms and child safety agencies will be granted authority to assess whether artificial intelligence systems can produce child exploitation material under new British laws.

Significant Increase in AI-Generated Harmful Material

The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the amendments, the authorities will permit approved AI companies and child protection organizations to examine AI systems – the underlying technology behind conversational AI and visual AI tools – to ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

"Fundamentally about preventing exploitation before it occurs," stated the minister for AI and online safety, noting: "Experts, under strict conditions, can now identify the risk in AI systems early."

Addressing Legal Obstacles

The amendments were needed because it is illegal to create or possess CSAM, which meant AI developers and others could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM had been uploaded online before dealing with it.

This legislation is designed to prevent that issue by helping to halt the production of such material at its source.

Legislative Structure

The government is introducing the changes as amendments to criminal justice legislation, which also establishes a ban on owning, creating or sharing AI systems designed to generate exploitative content.

Real-World Impact

This week, the minister toured the London base of Childline and heard a mock-up call to counsellors featuring an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of himself created using AI.

"When I learn about children experiencing blackmail online, it is a source of intense frustration in me and justified concern amongst parents," he said.

Concerning Data

A leading internet monitoring foundation stated that cases of AI-generated exploitation material – each of which can involve a webpage containing numerous images – had risen sharply so far this year.

Instances of the most severe content – the gravest form of exploitation – increased from 2,621 image and video files to 3,086.

  • Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
  • Depictions of infants and toddlers increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to guarantee AI tools are safe before they are released," commented the head of the online safety foundation.

"Artificial intelligence systems have enabled so victims can be victimised all over again with just a few clicks, giving offenders the ability to make possibly endless amounts of advanced, photorealistic child sexual abuse material," she added. "Material which further exploits survivors' suffering, and makes young people, particularly female children, less safe both online and offline."

Support Session Data

The children's helpline also published data from counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:

  • Using AI to rate body size, shape and appearance
  • Chatbots discouraging young people from talking to trusted adults about harm
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-manipulated images

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.
