UK Tech Firms and Child Protection Agencies to Examine AI's Ability to Generate Abuse Content

Tech firms and child protection organizations will be granted authority to evaluate whether AI systems can produce child exploitation material under recently introduced UK laws.

Significant Rise in AI-Generated Illegal Content

The announcement came as new figures from a protection monitoring body showed that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the government will permit approved AI companies and child protection organizations to inspect AI models – the foundational systems for conversational AI and image generators – and ensure they have adequate protective measures to prevent them from producing images of child sexual abuse.

"Ultimately, this is about preventing abuse before it happens," declared Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the risk in AI systems early."

Addressing Legal Obstacles

The amendments have been introduced because it is illegal to create or possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing process. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is designed to prevent that problem by helping to stop the production of such images at source.

Legislative Structure

The changes are being added by the government as revisions to the crime and policing bill, which is also implementing a prohibition on owning, producing or distributing AI systems designed to generate exploitative content.

Real-World Consequences

This week, the official toured the London base of a children's helpline and heard a simulated call to advisors involving a report of AI-based abuse. The interaction portrayed a teenager seeking help after facing extortion using a sexualised AI-generated image of themselves.

"When I hear about young people facing extortion online, it is a source of extreme frustration for me and justified anger amongst families," he stated.

Alarming Data

A leading internet monitoring organization stated that cases of AI-generated abuse content – such as online pages that may contain multiple images – had more than doubled so far this year.

Instances of category A content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a vital step to ensure AI tools are safe before they are released," stated the head of the online safety organization.

"AI tools have made it possible for survivors to be targeted repeatedly with just a few clicks, giving offenders the ability to create potentially endless amounts of sophisticated, lifelike exploitative content," she continued. "Material which further exploits survivors' trauma, and makes children, particularly female children, less safe both online and offline."

Counseling Interaction Data

The children's helpline also released data on support interactions in which AI was referenced. AI-related risks discussed in the conversations include:

  • Employing AI to evaluate weight, body and looks
  • Chatbots dissuading children from talking to trusted adults about harm
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-faked pictures

Between April and September this year, the helpline delivered 367 counselling interactions in which AI, chatbots and associated terms were discussed, significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Richard Mitchell

A passionate gamer and tech writer with over a decade of experience in reviewing video games and analyzing gaming trends.