British Tech Companies and Child Safety Agencies to Test AI's Ability to Generate Exploitation Content

Technology companies and child protection organizations will be granted permission to assess whether AI systems can generate child exploitation material under recently introduced British legislation.

Substantial Increase in AI-Generated Harmful Material

The announcement coincided with revelations from a safety monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will permit designated AI companies and child protection organizations to inspect AI models – the underlying systems for conversational AI and visual AI tools – and ensure they have sufficient safeguards to stop them from producing depictions of child sexual abuse.

The change is "fundamentally about preventing exploitation before it happens," declared Kanishka Narayan, noting: "Experts, under strict conditions, can now identify the danger in AI systems promptly."

Tackling Legal Obstacles

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and others could not generate such content as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.

This legislation aims to prevent that problem by enabling authorities to stop the creation of those images at source.

Legal Structure

The amendments are being added by the authorities as revisions to the crime and policing bill, which is also implementing a prohibition on owning, creating or sharing AI models designed to create exploitative content.

Real-World Consequences

This week, the official toured the London base of Childline and listened to a simulated call to advisers featuring an account of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I hear about young people facing blackmail online, it is a source of intense anger in me and justified anger amongst families," he said.

Alarming Data

A prominent online safety organization reported that cases of AI-generated abuse material – in the form of web pages that may each include multiple images – had more than doubled so far this year.

Cases of the most severe content – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
  • Portrayals of infants to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The law change could "constitute a vital step to ensure AI products are secure before they are released," stated the head of the online safety organization.

"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few simple actions, giving offenders the ability to create possibly endless amounts of sophisticated, lifelike exploitative content," she continued. "Material which additionally commodifies victims' trauma, and renders young people, especially female children, less safe both online and offline."

Counseling Session Information

The children's helpline also published details of counselling interactions in which AI was referenced. AI-related harms mentioned in the conversations include:

  • Using AI to evaluate body size, physique and looks
  • Chatbots discouraging children from talking to trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-faked pictures

Between April and September this year, the helpline delivered 367 support interactions in which AI, conversational AI and related topics were mentioned, significantly more than in the equivalent timeframe last year.

Half of the AI references in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Connor Chapman

A passionate gaming journalist with over a decade of experience covering slot machines and casino trends across the UK.