AI-Generated Abuse Is Rising. Are You Ready?

You may have heard that artificial intelligence isn’t always our friend. Well, it turns out that it, and some of its users, are definitely not friendly to our children.

According to WeProtect Global Alliance’s 2023 Global Threat Assessment, reports of child sexual abuse material (CSAM) online have increased 87% since 2019, with 32 million CSAM reports analyzed in 2022 alone. Another report, from the UK-based Internet Watch Foundation, found that the volume of AI-generated CSAM across the internet has also increased. During a 30-day review, a total of 3,512 explicit images and videos created with AI were found, a 17% increase over a similar review conducted in the fall of 2023.

Oh, but that’s not all AI is accused of. According to NBC News, some AI users are, in a sense, doing double duty in the abuse department.

Horrible people with the right technology can create nasty deepfakes from just a single photo or short video of someone in your life. But the ability to create completely realistic deepfakes from scratch is still far from perfect. So to make something truly lifelike, these offenders take old footage of real abuse plucked from the dark corners of the web and splice it with new faces. That means it’s not only today’s children who are being abused; victims of abuse from decades ago, now adult survivors, are being pulled back into that toxic web.

It’s all so disgusting and corrupt, and it only seems to get worse as AI gets better and better.

“Realism is improving. Severity is improving. It’s a trend we don’t want to see,” said Dan Sexton, Chief Technology Officer of the Internet Watch Foundation.

Of course, all of the above begs the question: if we can somehow measure how much of this AI-generated crap is being produced, why can’t we just stop it all? People get locked out and kicked off Big Tech platforms all the time, right? Why not this?

Well, experts say it’s because abusive materials are a different beast. And so are their creators. Social platforms regularly work to eliminate such illegal material, and law enforcement is always on the lookout for tips on its whereabouts. But the bad guys and their bad stuff like to hide in dark corners.

These deepfakes, along with the original footage used to make them, are often stored on servers in foreign countries where there are no laws against them or where local authorities are ill-equipped to deal with them. Many child abuse distributors exploit security holes in social media that allow them to post videos ‘privately’ (thereby evading automated detection of explicit material) and then share the login details with CSAM consumers.

Then there’s the dark web, a part of the internet that’s hidden from traditional search engines and that bounces connections around in untraceable patterns. It’s like a dank alleyway hidden somewhere in the cloud.

The only surefire way to block this kind of material would be to shut down the internet altogether or take a cue from a country like China and build a nationwide firewall of iron-clad government control. Of course, even with the best intentions, you can imagine how quickly that last option could turn into a terrible outcome of its own.

So what do we do with these offensive videos that we can’t seem to get rid of? Even synthetic, imperfect videos of teens can be used for cyberbullying. Fake explicit images have been used for sextortion and blackmail. We have even seen stories of teenagers being driven to suicide.

What needs to be done?

Well, it can be helpful to keep an eye on news reports, research, and suggestions from abuse support organizations. It never hurts to be as informed as possible when making decisions for your family.

However, the first step toward your family’s personal safety is probably the easiest: limit the number of photos and videos you or your children post online. Even if your accounts are set to private, WeProtect’s report found that 60% of online abuse cases involved perpetrators known to the child. Bad actors can use various AI technologies to alter innocent family photos and videos, and they don’t care whether those photos come from your child’s social media account…or yours.

Next, work on understanding the apps, social networks, and online services your older children use. Just take a look at YouTube and see how easy it is to make a deepfake. (Not that I recommend you try it yourself.)

Then talk to your children early and often about this new world we live in and the dangers that exist. Talk about online safety and how that takes precedence over sharing photos from the weekend pool party.

Finally, create a home environment where all of your family members feel safe talking to you about anything they encounter in this space. If they are the victim of abuse or a scam, or an unexpected photo lands in their DMs, you want them to feel comfortable enough to share it and talk about it.

Hey, sexual abuse may not be an easy thing for everyone to talk about at the kitchen table. But you can bet AI won’t be shy about it.
