The numbers are staggering, but experts say what we’re seeing is only the beginning. As AI-generated child sexual abuse material, or CSAM, surges to record levels, researchers warn that the technology isn’t just producing more harmful content; it is fundamentally changing how children are targeted, how survivors are revictimized, and how investigators are overwhelmed.
Investigators already had their hands full scrubbing CSAM from the internet, and generative AI has made that challenge worse. The Internet Watch Foundation (IWF), Europe’s largest hotline for combating online child sexual abuse imagery, documented a 260-fold increase in AI-generated child sexual abuse videos in 2025, from just 13 videos the year prior to 3,443. Researchers who have spent years tracking this issue say the explosion is not a surprise. It is, however, a warning.
“Any numbers that we see, it’s the tip of the iceberg,” said Melissa Stroebel, vice president of research and strategic insights at Thorn, a nonprofit that builds technology to combat online child sexual exploitation. “That is about what has been either detected or proactively reported.”
The surge is a direct consequence of generative AI becoming faster, cheaper, and more accessible to bad actors. Thorn has identified three distinct ways these tools are now being weaponized against children.
The first is the revictimization of historical abuse survivors. A child who was abused in 2010 and whose images have circulated online for over a decade now faces an entirely new layer of harm. Offenders are using AI to take those existing images and personalize them: inserting themselves into recorded scenes of abuse to produce new material.
“In the same way that you can Photoshop Grandma who missed the Christmas picture into the Christmas picture,” Stroebel told Fortune, “bad actors can Photoshop themselves into scenes and records of an identified child.” That process creates fresh victimization for survivors who may have spent years trying to move past their abuse.
The second is the weaponization of innocent images. A photo of a child on a school soccer team web page is now potential source material for abuse. With widely available AI tools, an offender can convert that entirely benign image into sexual abuse material in minutes. Thorn is also documenting peer-on-peer cases, where a young person generates abusive imagery of a classmate without fully grasping the severity of the harm they are causing.
The third, and most systemic, impact is the strain being placed on already overwhelmed reporting pipelines. The National Center for Missing and Exploited Children receives tens of millions of CSAM reports every year. The speed with which AI can now generate novel material dramatically compounds that burden and creates a new urgency. When a new image arrives, investigators must determine whether it depicts a child in active danger right now or is entirely AI-generated.
“Those are really critical inputs to help them triage and respond to these cases,” Stroebel said. AI-generated content makes those determinations significantly harder, though she added that an image captured in real time and an AI-generated image are both reported and treated the same way by authorities.
The technology has also made some of the most repeated child safety guidance dangerously outdated. For years, children have been warned not to share images online as a basic safeguard against exploitation. That advice no longer holds. Thorn’s own research found that one in 17 young people has personally experienced deepfake imagery abuse, and one in eight knows someone who has been targeted. Victims of sextortion are now being sent images that look exactly like them, images they never took.
“There’s no need for a child to have shared an image any longer for them to be targeted for exploitation,” Stroebel said.
On the detection front, traditional hashing technology, which works like a digital fingerprint for known abuse files, cannot identify AI-generated content because each synthetically created image is technically new. Take, for example, a photo of something very well known, like the Statue of Liberty. That photo has a digital fingerprint. Now, say you zoom in far enough to change the shading of a single pixel by 0.1%. The change is imperceptible to the human eye, but the photo’s fingerprint is now completely different, and the hashing technology no longer recognizes it as the same image.
Under traditional hashing, that same one-pixel change to a photo known to be CSAM would be enough for it to go undetected. Classifier technology, which evaluates what an image contains rather than matching it to a known file, is now essential to catching content that would otherwise slip through entirely.
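A minimal sketch, assuming a cryptographic hash such as SHA-256 stands in for the “digital fingerprint” described above (the pixel data here is invented placeholder bytes, not any real image or detection database), shows how a single-pixel change produces an entirely new fingerprint:

```python
import hashlib

# Stand-in "image": a small block of made-up grayscale pixel values.
# Placeholder data for illustration only; no real imagery or hash list is involved.
original = bytes(range(256)) * 16          # 4,096 bytes of fake pixel data

# Copy the image and nudge a single pixel by the smallest possible amount.
altered = bytearray(original)
altered[0] = (altered[0] + 1) % 256

# A cryptographic hash plays the role of the "digital fingerprint."
fingerprint_original = hashlib.sha256(original).hexdigest()
fingerprint_altered = hashlib.sha256(bytes(altered)).hexdigest()

print(fingerprint_original)
print(fingerprint_altered)
print("Same fingerprint?", fingerprint_original == fingerprint_altered)  # prints False
```

A classifier-based system, by contrast, scores what the image actually depicts rather than its exact bytes, so a cosmetic change like this would not help a file evade detection.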
For parents, Stroebel’s message is urgent and unambiguous. The conversation cannot wait, and it must go further than old warnings. If a child comes forward, the first response cannot be skepticism: “Our job is, ‘Are you safe, and how do I help you move through to the next step?’”