UK Technology Firms and Child Safety Agencies to Examine AI's Ability to Create Exploitation Content

Technology companies and child protection agencies will be granted authority to evaluate whether artificial intelligence tools can produce child abuse material under recently introduced UK legislation.

Significant Rise in AI-Generated Harmful Content

The declaration coincided with revelations from a protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the changes, the government will permit approved AI companies and child protection organizations to examine AI models – the foundational technology for chatbots and visual AI tools – and ensure they have adequate protective measures to prevent them from creating depictions of child sexual abuse.

"This is ultimately about preventing exploitation before it happens," said the minister for AI and online safety, adding: "Under rigorous protocols, specialists can now identify danger in AI models promptly."

Tackling Legal Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot create such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM appeared online before they could act against it.

The legislation aims to close that gap by helping to stop the production of such images at source.

Legislative Structure

The changes are being added by the authorities as modifications to the crime and policing bill, which is also implementing a prohibition on possessing, creating or sharing AI systems developed to create child sexual abuse material.

Practical Consequences

This week, the minister visited the London base of Childline and listened to a mock-up call to counsellors involving a report of AI-based exploitation. The scenario portrayed a teenager seeking help after being blackmailed with an explicit AI-generated deepfake of himself.

"When I hear about children experiencing extortion online, it fills me with extreme anger, and rightly angers parents too," he stated.

Alarming Data

A prominent internet monitoring organization stated that cases of AI-generated abuse material – such as webpages that may include numerous files – had more than doubled so far this year.

Instances of the most severe material – the gravest category of abuse – rose from 2,621 images to 3,086.

  • Female children were predominantly targeted, making up 94% of prohibited AI depictions in 2025
  • Depictions of infants to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a vital step to guarantee AI tools are safe before they are released," stated the head of the internet monitoring organization.

"AI tools have made it so victims can be targeted all over again with just a few clicks, giving offenders the ability to make possibly endless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which additionally exploits survivors' suffering, and makes children, particularly girls, more vulnerable both online and offline."

Counseling Session Information

Childline also published details of counselling interactions where AI has been referenced. AI-related harms mentioned in the conversations include:

  • Using AI to evaluate weight, body and appearance
  • AI assistants discouraging young people from consulting safe guardians about abuse
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 counselling interactions in which AI, chatbots and associated terms were mentioned, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions were related to mental health and wellbeing, encompassing using AI assistants for support and AI therapy applications.

Mary Austin