Police use AI-generated 14-year-old girl to lure sex criminals

Is this really the best way to protect children?

New methods

A lawsuit filed by the state of New Mexico reveals that police officers used an AI-generated image of a fake teenage girl to lure pedophiles.

The lawsuit was filed last week in New Mexico against social media app Snapchat, which prosecutors say has failed to “protect children from sextortion, sexual exploitation and harm.” As highlighted by Ars Technica, the case file shows that part of the police “undercover investigation” involved state Justice Department officials creating a “fake Snapchat account for a 14-year-old girl named Heather.”

As “Heather,” agents found and exchanged messages with accounts belonging to apparent pedophiles, with disturbing usernames like “child.rape” and “pedo_lover10,” the filing says. Historically, as Ars notes, police conducting similar investigations would use images of younger-looking adult women — often police officers — to convince child abusers that they were speaking to a real teenage girl. But in this case, officers used an AI-generated image of a sexualized 14-year-old to convince the perpetrators that Heather was, in fact, the real thing.

According to the filing, the tactic worked: many of the accounts the agents communicated with were fooled by the AI-generated photo and attempted to coax “Heather” into sharing explicit sexual images or child sexual abuse material (CSAM).

But while the investigation succeeded in revealing the disturbing, dark reality of Snapchat’s algorithms, Ars notes that the agents’ use of AI raises new ethical questions. For example, AI-generated CSAM is already on the rise — so should the government really be creating more of it, even if it’s fake?

“Of course it would be ethically concerning if the government were to create deepfake AI child sexual abuse material (CSAM),” attorney Carrie Goldberg, who rose to prominence representing victims of Harvey Weinstein’s sexual abuse, told Ars, “because those images are illegal, and we don’t want any more CSAM circulating.”

C(AI)tch-22

There are also ethical questions about the AI training datasets that police relied on in their efforts.

To generate fake images of children, an AI model must be trained on photos of real children. It’s hard to argue that a child could give their full consent to their image being used for AI training in the first place — a question that becomes all the more serious when AI is used to generate sexualized or otherwise harmful images of them.

Elsewhere, Goldberg warned Ars of a practical risk: AI-generated photos of fake children could become a useful tool for perpetrators’ defense teams, who could argue that their clients were lured into a trap.

All in all, the investigators’ use of AI is a catch-22 for law enforcement. On one hand, the lawsuit alleges, predators took the bait. But if the goal is to protect real children, recycling images of real children into sexualized, AI-generated images of fake children feels a far cry from total protection.

More about AI and children: AI is trained on images of real children without consent
