Google, Apple, Discord Let Malicious AI ‘Undress’ Websites Use Their Login Systems

Major tech companies including Google, Apple, and Discord have enabled people to quickly sign up for malicious “undress” websites, which use AI to remove clothing from real photos, producing fake nude images of victims without their consent. More than a dozen of these deepfake websites have been using login buttons provided by the tech companies for months.

A WIRED analysis found that 16 of the largest so-called undress and “nudify” websites use the login infrastructure of Google, Apple, Discord, Twitter, Patreon and Line. The approach allows people to easily create accounts on the deepfake websites, giving them a false sense of credibility before paying for credits and generating images.

While bots and websites that create non-consensual intimate images of women and girls have existed for years, they have grown in number with the introduction of generative AI. Such “undressing” abuse is alarmingly widespread, with teenage boys reportedly creating images of their classmates. Critics say tech companies have been slow to address the scale of the problem, with the sites appearing high in search results, paid ads promoting them on social media, and apps appearing in app stores.

“This is a continuation of a trend that normalizes sexual violence against women and girls by Big Tech,” said Adam Dodge, attorney and founder of EndTAB (Ending Technology-Enabled Abuse). “Sign-in APIs are tools of convenience. We should never make sexual violence an easy act,” he said. “We should be building walls around access to these apps, and instead we’re giving people a drawbridge.”

The sign-in tools analyzed by WIRED, which are deployed via APIs and common authentication methods, allow people to use existing accounts to join the deepfake websites. Google’s sign-in system appeared on 16 websites, Discord’s on 13 and Apple’s on six. The X button appeared on three websites, while Patreon and messaging service Line both appeared on the same two websites.

WIRED is not naming the sites because they enable abuse. Several are part of broader networks owned by the same people or companies. The login systems were used despite the tech companies generally having rules that prevent developers from using their services in ways that harm, harass, or invade people’s privacy.

When contacted by WIRED, spokespeople for Discord and Apple said they had removed developer accounts associated with their sites. Google said it would take action against developers if it found violations of its terms. Patreon said it bans accounts that allow explicit images, and Line confirmed it was investigating but said it could not comment on specific sites. X did not respond to a request for comment about how its systems are used.

In the hours after Jud Hoffman, Discord vice president of trust and safety, told WIRED that the company had terminated the sites’ access to its APIs for violating its developer policies, one of the undress sites posted in a Telegram channel that authorization via Discord was “temporarily unavailable” and said it was working to restore access. That undress service did not respond to WIRED’s request for comment about its activities.

Since the rise of deepfake technology in late 2017, the number of non-consensual intimate videos and images has grown exponentially. While videos are harder to produce, the creation of images using “undress” or “nudify” websites and apps has become commonplace.

“We need to make it clear that this is not innovation, this is sexual abuse,” said San Francisco City Attorney David Chiu, who recently filed a lawsuit against undress and nudify websites and their creators. Chiu says the 16 websites targeted in his office’s lawsuit received about 200 million visits in the first six months of this year alone. “These websites are engaged in horrific exploitation of women and girls around the world. These images are used to harass, degrade and threaten women and girls,” Chiu alleges.

The undress websites operate like businesses, often in the shadows, disclosing very little about who owns them or how they operate. Websites run by the same people often look the same and use nearly identical terms and conditions. Some are offered in more than a dozen languages, demonstrating the global nature of the problem. Some of the Telegram channels linked to the websites have tens of thousands of members each.

The sites are also constantly evolving, posting regularly about new features they’re producing—one claims their AI can customize how women’s bodies look and allow “uploads from Instagram.” The sites typically charge people to generate images, and may run affiliate programs to encourage people to share them; some have banded together to create their own cryptocurrency that can be used to pay for images.

A person identifying himself as Alexander August, the CEO of one of the websites, responded to WIRED, saying they “understand and acknowledge the concerns about the potential misuse of our technology.” The person claims the website has implemented several safety mechanisms to prevent the creation of images of minors. “We are committed to taking social responsibility and are open to working with official bodies to improve the transparency, security, and reliability of our services,” they wrote in an email.

Tech company logins are often presented when someone tries to sign in to a site or clicks buttons to generate images. It’s unclear how many people used the login methods, and most websites also allow people to create accounts using just their email address. However, of the websites reviewed, the majority had implemented more than one tech company’s sign-in APIs, with Sign in with Google being the most widely used. When this option is clicked, prompts from Google’s system indicate that the website is getting people’s names, email addresses, language preferences, and profile pictures.
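The sign-in flow described above is standard OAuth: the site redirects visitors to the provider’s authorization endpoint with a list of “scopes,” and the consent screen discloses exactly the data those scopes grant. As an illustrative sketch (the client ID and redirect URI below are placeholders, not values from any real site), a Sign in with Google URL requesting the basic profile data the article describes might be built like this:

```python
from urllib.parse import urlencode

# Google's documented OAuth 2.0 authorization endpoint.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_signin_url(client_id: str, redirect_uri: str) -> str:
    """Build a Sign in with Google URL requesting basic profile scopes.

    The "openid email profile" scopes correspond to the data the consent
    prompt discloses: name, email address, language preference, and
    profile picture.
    """
    params = {
        "client_id": client_id,        # placeholder; issued to the developer by Google
        "redirect_uri": redirect_uri,  # placeholder; must match a registered URI
        "response_type": "code",       # authorization-code flow
        "scope": "openid email profile",
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

url = build_signin_url("example-client-id", "https://example.com/callback")
print(url)
```

Because the `client_id` is tied to a registered developer account, the provider can see, and terminate, the developer behind each site; this is also how multiple sites can be linked to a single developer account, as WIRED found with Gmail accounts.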

Google’s sign-in system also reveals some information about the developer accounts associated with a site. For example, four sites are associated with one Gmail account; another six sites are associated with another. “To use Sign in with Google, developers must agree to our Terms of Service, which prohibit the promotion of sexually explicit content and conduct or content that defames or harasses others,” a Google spokesperson said, adding that “appropriate action” will be taken if those terms are violated.

Other technology companies whose login systems were used said they blocked the associated developer accounts after being contacted by WIRED.

Discord’s Hoffman says that in addition to taking action against the sites flagged by WIRED, the company “will continue to pursue other sites that we find that violate our policies.” Apple spokesperson Shane Bauer says the company has terminated multiple developer licenses and that Sign In With Apple will no longer work on the sites. Adiya Taylor, corporate communications lead at Patreon, says it bans accounts that provide access to, or fund, third-party tools that may produce adult material or explicit images. “We will take action against any works or accounts on Patreon that violate our Community Guidelines.”

In addition to the login systems, several websites displayed Mastercard or Visa logos, suggesting the card networks could be used to pay for their services. Visa did not respond to WIRED’s request for comment, while a Mastercard spokesperson said that “purchases of non-consensual deepfake content are not permitted on our network” and that it takes action when it detects or is notified of such instances.

Tech companies and payment providers have on multiple occasions taken action against AI services that enable people to generate non-consensual images or videos following media reports about their activities. Clare McGlynn, a professor of law at Durham University with expertise in the legal regulation of pornography and online sexual violence and abuse, says Big Tech platforms are enabling the growth of undress websites and similar sites by not taking proactive action against them.

“What’s concerning is that these are the most basic security steps and moderation that are missing or not enforced,” McGlynn said of the sign-in systems in place, adding that it’s “wholly inadequate” for companies to respond only when journalists or campaigners highlight how their rules are easily circumvented. “It’s clear that they simply don’t care, despite their rhetoric,” McGlynn said. “Otherwise they would have taken these most basic steps to restrict access.”
