AI-generated child abuse web pages surge 400 per cent, alarming watchdog

AI tools are increasingly being used to generate child sexual abuse videos using the likeness of real children.


Reports of child sexual abuse imagery created using artificial intelligence tools have surged 400 per cent in the first half of 2025, according to new data from the Britain-based non-profit Internet Watch Foundation (IWF).

The organisation, which monitors child sexual abuse material online, recorded 210 web pages containing AI-generated material in the first six months of 2025, up from 42 in the same period the year before, according to a report published this week.

On those pages were 1,286 videos, up from just two in 2024.

The majority of this content was so realistic it had to be treated under British law as if it were actual footage, the IWF said.

Roughly 78 per cent of the videos – 1,006 in total – were classified as “Category A”, the most severe level, which can include depictions of rape, sexual torture and bestiality, the IWF said.

Most of the videos involved girls and, in some cases, used the likenesses of real children.

The growing prevalence of AI-generated child abuse material has alarmed law enforcement worldwide.

As generative AI tools become more accessible and sophisticated, the quality of the pictures and videos is improving, making them harder than ever to detect with traditional techniques.

While early videos were short and glitchy, the IWF now sees longer, more realistic productions featuring complex scenes and varied settings.

The authorities say the content is often used for harassment and extortion.

“Just as we saw with still images, AI videos of child sexual abuse have now reached the point they can be indistinguishable from genuine films,” said Mr Derek Ray-Hill, interim chief executive of the IWF.

“The children being depicted are often real and recognisable. The harm this material does is real, and the threat it poses threatens to escalate even further,” he said.

Taking action

Law enforcement agencies are starting to take action.

In a coordinated operation earlier in 2024, Europol arrested 25 individuals in connection with distributing such material.

More than 250 suspects were identified across 19 countries, Bloomberg reported.

The IWF called for Britain to develop a regulatory framework to ensure AI models have controls to block the production of this type of material.

In February, Britain became the first country to criminalise the creation and distribution of AI tools intended to generate child abuse content.

The law bans possession of AI models optimised to produce such material, as well as manuals that instruct offenders on how to do so.

In the United States, the National Center for Missing & Exploited Children – an IWF counterpart – said it received more than 7,000 reports related to AI-generated child sexual abuse content in 2024.

While most commercial AI tools include safeguards against generating abusive content, some open-source or custom models lack these protections, leaving them vulnerable to misuse. BLOOMBERG
