UK Tech Companies and Child Safety Agencies to Examine AI's Capability to Create Abuse Content

Tech firms and child protection agencies will be given authority to assess whether artificial intelligence systems can produce child abuse images under new UK laws.

Substantial Increase in AI-Generated Illegal Content

The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the amendments, the government will allow designated AI companies and child safety organizations to examine AI models – the foundational technology behind conversational AI and image generators – and verify that they have adequate safeguards to prevent them from producing images of child sexual abuse.

The measures are "ultimately about stopping exploitation before it occurs," stated the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the danger in AI systems early."

Tackling Legal Obstacles

The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot create such content as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it. The legislation is designed to avert that problem by helping to halt the creation of such material at source.

Legal Framework

The changes are being introduced by the government as revisions to the crime and policing bill, which also implements a prohibition on possessing, producing or sharing AI models developed to generate exploitative content.

Real-World Consequences

This week, the minister visited the London base of a children's helpline and heard a mock-up of a call to advisers involving a report of AI-based abuse. The call depicted a teenager requesting help after facing extortion using an explicit deepfake of himself, created using AI.

"When I learn about children facing blackmail online, it is a cause of intense anger in me and justified anger amongst families," he said.

Alarming Statistics

A prominent internet monitoring foundation reported that instances of AI-generated abuse material – such as webpages that may include multiple images – had risen significantly so far this year. Instances of the most severe material – the gravest form of abuse – increased from 2,621 visual files to 3,086.

- Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025.
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025.

Sector Reaction

The legislative amendment could "represent a vital step to ensure AI tools are secure before they are released," commented the chief executive of the internet monitoring organization.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, providing offenders the capability to create potentially endless quantities of advanced, lifelike exploitative content," she added. "Content which additionally commodifies victims' trauma, and renders young people, particularly female children, more vulnerable on and off line."

Support Session Information

Childline also published details of support interactions where AI has been mentioned.
AI-related harms discussed in the conversations include:

- Using AI to rate weight, physique and appearance
- Chatbots discouraging young people from consulting safe guardians about abuse
- Facing harassment online with AI-generated content
- Online extortion using AI-manipulated pictures

Between April and September this year, Childline delivered 367 support interactions in which AI, conversational AI and related topics were discussed, significantly more than in the same period last year. Fifty percent of the references to AI in the 2025 sessions related to mental health and wellness, including using AI assistants for support and AI therapy apps.