AI-generated child abuse material: insights from discussions on Dark Web forums

By Dr. Deanna Davy, Senior Research Fellow at the International Policing and Public Protection Research Institute.

September 2024

Headlines and social media feeds are abuzz with stories about the potential positive effects of artificial intelligence (AI) on society. Far less attention is paid to its harmful effects. A major concern, and one that requires much more scientific attention, awareness and deterrence, is the use of AI to create child sexual abuse material (CSAM).

Agencies such as WeProtect Global Alliance and the Internet Watch Foundation (IWF) raised the alarm about AI CSAM in 2023, highlighting it as an area of concern for governments, civil society organizations, private sector agencies, and parents and children. IWF’s 2023 research found that offenders are taking images of children, often celebrity children, and applying deep learning models to create AI CSAM.

There are currently two main categories of AI CSAM: (1) AI-manipulated CSAM, where images and videos of real children are altered into sexually explicit content, and (2) AI-generated CSAM, where entirely new sexual images of children are fabricated (Krishna et al., 2024). IWF reported in 2023 that both types of AI CSAM are common.

Researchers from the International Policing and Public Protection Research Institute (IPPPRI) wanted to learn more about what Dark Web forum members were saying and doing in relation to AI CSAM. To do so, we examined forum members’ posts and discussions about AI CSAM. We collected this data using Voyager, an open-source intelligence (OSINT) platform that collects, stores, and structures content from publicly accessible online sources, including Dark Web forums where CSAM-related discussions take place. Data collection was conducted in January 2024 using a keyword search of Voyager’s ‘Atlas’ dataset of Dark Web child sexual exploitation (CSE) forums. A search for the terms “AI” OR “Artificial intelligence” across the 12 months of 2023 returned 19,951 results: 9,675 links, 9,238 posts, 1,021 threads, and 17 forum profiles. We then examined a sample of these results to conduct a preliminary analysis of these discussions.

What we discovered is deeply disturbing. First, there is real appreciation of, and interest in, AI CSAM. Forum members refer to those who create AI CSAM as “artists.” What forum members appreciate is that creators can take an image of, say, a favorite childhood movie character and produce a plethora of AI CSAM of that child. At present, forum members are most interested in AI CSAM of famous children, such as child actors and child sports stars.

We found that forum members who create AI CSAM are not IT, AI, or machine learning experts; they teach themselves how to create it. They have easy access to online tutorials and guides on creating AI CSAM, and these resources are widely shared on Dark Web forums. They also reach out to other forum members who already have experience creating AI CSAM, asking how to “train” the software and how to overcome the challenges they encounter (such as generated images depicting a child with too many limbs or fingers). As part of this effort to improve their production skills, forum members actively ask others to share CSAM so they can use the material to “practice.”

We also found evidence that, as forum members develop their skills in producing AI CSAM, they actively encourage others to do the same. This is particularly concerning because it can fuel demand and create a perpetual upskilling loop: as more forum members view AI CSAM and become interested in producing it themselves, they sharpen their AI skills, share the material they create, and encourage others to create and share their own.

We also found that some forum members are already moving from creating what they describe as ‘softcore’ AI CSAM to more ‘hardcore’ material. This pattern may be driven by normalisation of, and desensitisation to, the material, and a resulting search for more explicit and violent content.

It was also clear that forum members hope AI will continue to develop rapidly, so that in the near future it will no longer be possible to tell whether a sexual image of a child is real. They also hope that AI will develop to the point where they can create increasingly hardcore and interactive material (such as interactive videos in which they can instruct a video character to perform sexual acts).

On the day we published these findings, a man was convicted, in a landmark UK case, of creating more than 1,000 sexual images of children using AI. Our analysis of discussions about AI CSAM on the Dark Web suggests the convicted individual is just one of many committing such crimes.

This is not a niche area. On the contrary, the creation of AI CSAM is on its way to the mainstream, and that is why we need a swift and unyielding response. The cat is, so to speak, already out of the bag: perpetrators are incorporating this tool into their toolbox.

Our task now is to limit the spread of this phenomenon through legal reform, strong deterrence measures, further evidence gathering, and awareness raising.

Dr Deanna Davy is a Dawes Senior Research Fellow at the International Policing and Public Protection Research Institute. Deanna has conducted research into human trafficking and child sexual exploitation for a number of government, international and non-governmental agencies, including the United Nations Office on Drugs and Crime, the International Organization for Migration, the United Nations Children’s Fund and ECPAT International. Before joining the IPPPRI team, Deanna was a Research Fellow (Modern Slavery) at the University of Nottingham.
