The UK’s Internet Watch Foundation (IWF) has issued a stark warning over the rise of AI-generated child sexual abuse material (CSAM), with videos becoming increasingly realistic and prevalent online.
In the first half of 2025, the IWF identified 1,286 illegal AI-generated child abuse videos, a shocking rise from just two cases in the same period last year.
Over 1,000 of these videos were classed as Category A — the most extreme and graphic level of abuse under UK law.
AI Misused by Paedophiles to Mass-Produce Abuse Content
According to the IWF, rapid investment in artificial intelligence has made video-generation tools widely accessible, and paedophiles are exploiting this technology at scale. One IWF analyst explained:
“AI is a highly competitive and well-funded industry. The sheer availability of tools makes it easy for perpetrators to generate convincing abuse material.”
The organisation reported a 400% increase in web pages (URLs) hosting AI-generated CSAM in the first six months of 2025, recording 210 URLs, up from 42 in the same period of 2024, with many pages containing hundreds of AI-made images and videos.
In one disturbing discovery, analysts found a paedophile on a dark web forum boasting about how fast the technology is evolving: “I just master one tool and then something newer and better appears,” they wrote.
Real Victims Used to Train AI for Abuse
IWF analysts say abusers are using open-source AI models and “fine-tuning” them with actual CSAM, in some cases using only a handful of real abuse videos to teach the AI how to produce shockingly lifelike footage. Some of the most realistic AI abuse content was modelled on known real-life victims, the IWF confirmed.
Serious Criminal Risk and Legal Crackdown
Derek Ray-Hill, interim chief executive of the IWF, warned of a “potential explosion” in AI-driven CSAM, which could overwhelm the open internet:
“There is a very real risk that AI-generated abuse content spreads uncontrollably, fuelling serious crimes like child trafficking, sexual exploitation, and modern slavery.”
Ray-Hill also noted that AI allows offenders to flood the web with illegal material without directly abusing new children; because some of this content is modelled on known victims, it compounds the harm already suffered by existing survivors.
Government Action: New Laws Target AI Abuse Tools
The UK government is taking firm action, introducing legislation that makes it illegal to possess, create, or distribute AI tools specifically designed to generate CSAM. Under the new law, offenders could face up to five years in prison.
Additionally, manuals instructing users how to abuse AI for such purposes will also be outlawed, with those caught possessing them facing up to three years behind bars.
In a February announcement, Home Secretary Yvette Cooper said: “It’s absolutely vital we confront child sexual abuse both online and offline. The law must keep pace with the technology.”
AI-generated child sexual abuse content is already criminal under the Protection of Children Act 1978, which prohibits the creation, distribution, or possession of “indecent photographs or pseudo-photographs” of children.
