How Deepfakes Are Radically Changing the Creator Landscape

Deepfake technology and the malicious use of AI are causing widespread fear, especially as the November U.S. election approaches. Adobe Chief Strategy Officer and EVP of Design and Emerging Products, Scott Belsky, explains what deepfakes mean for the media and creator landscape, and how developers like Adobe are pioneering new ways to authenticate human-generated content for everyday consumers.

This is an abbreviated transcript of an interview from Rapid Response, hosted by former Fast Company editor-in-chief Bob Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today’s top business leaders tackling real-time challenges. Subscribe to Rapid Response wherever you get your podcasts, so you never miss an episode.

In recent years, environments have become a bit more complex with the rise of generative AI tools like DALL-E, Midjourney, and Adobe’s Firefly. Several videos have gone viral in the past year from designers who have expressed concern that they are being replaced by AI.

Yeah, in some ways history rhymes, right? Look back at the rise of the digital camera: remember that digital photographers weren’t even allowed into many of the American photography associations. And even further back, painters who used to do family portraits were terribly offended by the idea that you could click a button and suddenly take a family portrait with this new thing called film, right? Technology has always unleashed more creative possibilities.

So many of our customers tell us that they spend most of their time in our tools doing mundane, repetitive work. And they’re always asking us for features that help them be more productive and expressive creatively and do less of the tedious stuff. How can we not use this technology to do that? And at the same time, how do we train these models? And how do we make sure that people’s IP is protected? How do we make sure that contributors are compensated? We’ve tried to be as high-level as we can on those fronts. And at the same time, there are people who haven’t started playing with the technology yet, and we’re trying to get them to experiment with it.

One of the most worrying applications of AI is deepfakes: things that look and sound real, but aren’t. The New York Times had a recent article, I don’t know if you’ve seen it, about scammers deepfaking Elon Musk. And we’ve seen reports of schoolchildren creating fake images of classmates, superimposing the heads of fellow students onto naked bodies. These risks are real.

I like to say that we’re moving from an era of trust but verify to an era of verify and then trust. It’s a new world. Adobe has tried to lead the charge through an open-source, non-profit consortium built around the content credentials movement. The idea is that good actors, people who want to be trusted, can attach credentials to their content to show how it was created. You can say: this is the model that was used, these are the adjustments that I made. And you can look at the content and ask, “Hey, do I feel like I’ve verified this media enough to trust it or not?” I think that’s going to be the standard going forward.
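The mechanics behind content credentials can be illustrated in miniature: bind a cryptographic hash of the media to a set of provenance claims (which model was used, which edits were made) and sign the bundle so tampering is detectable. The sketch below is a simplified illustration of that idea only; the real content credentials standard uses public-key certificates and a structured manifest format, not the shared secret and ad-hoc JSON shown here.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; real content credentials
# are signed with a creator's public-key certificate, not a shared secret.
SIGNING_KEY = b"creator-private-key"


def attach_credentials(content: bytes, provenance: dict) -> dict:
    """Bundle a content hash and provenance claims into a signed manifest."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "claims": provenance,  # e.g. model used, adjustments made
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Check that the content is unmodified and the manifest is unforged."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest()
    )
```

A consumer-side check then reduces to calling `verify_credentials` on the media they received: if either the pixels or the claims were altered after signing, verification fails, which is exactly the “verify and then trust” posture described above.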

As an entrepreneur, it sounds quite tricky, because on the one hand you say you have a responsibility: you’re trying to get Silicon Valley to look at the risks and address them. And on the other hand, you know that if you don’t make these tools available quickly, someone else will. So how cautious can you afford to be and still stay on top of things? I think about how OpenAI released ChatGPT partly because they had nothing to lose, while companies like Google were delayed because of the risks and then had to play catch-up. So you’re balancing that.

Well, we are. And I encourage our teams to anchor to our customers. And here’s the thing: if you talk to the average customer, they’re excited about superpowers that will make them more productive and help them grow their careers. Of course, there are people who are like, “Hey, you know, burn the boats and let’s just do what’s here,” and “Who cares how it’s trained or whatever.” And then there’s another group on the other end of the spectrum that’s like, “Don’t do AI, Adobe. Just don’t do it. Just ignore it. Why would you need to use this technology? We don’t want anything to change.” But the bulk of our customers are a big group in the middle that is very pragmatic and accountable about this technology. Those are the people we’re constantly taking the pulse of and anchoring ourselves to, and that also feeds the business.

And it has the largest customer base.

100%.

In the political sphere, we recently had Donald Trump claim that footage of Kamala Harris rallies was generated by AI. Is it likely that we’ll see even more discussion and conversation about deepfake activity during the election? And is that a distraction or is that, like, significant?

Whether it’s the internet or Bitcoin, these are all technologies that, early on, were used heavily for illicit purposes. This technology is no different, and we’re certainly going to see early use cases that are concerning.

I think the most important thing, if you step back, is that we talk about these use cases, even though it’s sensational right now to write articles about, “Oh my God, it’s a deepfake!” It’s actually good, because it popularizes the fact that you shouldn’t trust everything you see anymore. Or take the stories about someone getting a phone call in their grandmother’s voice asking for money. It’s very scary, but the answer is not to stop the use of the technology. The answer is to make sure people know how it can be abused, and then to develop safeguards.

The producer of this show asked the design team at our company what I should ask about Adobe. The topic they were most interested in was copyright rules. How will copyright evolve in this AI world?

The concerns and questions that creatives are dealing with are different now than they have ever been in my career.

I’m going to be proactive about the debacle we had with the update to our Terms of Service, which hadn’t changed in 11 years. We had made an adjustment around when content is scanned for child sexual exploitation imagery. But the change was paraphrased by our legal team in a pop-up that customers had to accept, which said, “Adobe has updated its content moderation policy.” That caused some customers to worry: “Oh, Adobe is looking at my work.” We were basically doing what we were legally required to do. But then customers went in and saw the limited license that Adobe uses to modify your content so it works across devices, the kind of normal provision that anyone in tech knows is par for the course.

But customers suddenly said, “Wait a minute, limited license? Does that mean you train on our content?” We’ve never trained on our customers’ content. We’ve always been very, very clear about how our models were trained for generative AI and our compensation program for those who have content that’s part of Adobe inventory that’s being trained on, and so on.

But the lesson I’ve learned is that every company now needs to look back and re-examine their policies in this area to make sure they’re taking into account the concerns of the modern designer.

We went through the entire terms of service and took notes on them. We said exactly what we will and won’t do. Typically, companies don’t say what they won’t do in their terms of service; they just say what they will do, or what they’re asking customers to agree to. We were explicit about what we won’t do. And I’m actually really proud of it. I would say it’s probably the most creator-friendly terms of service on the internet right now, as far as tools companies are concerned. But it was a wake-up call. And so I think that, to your point about copyright and copyright law, we need to be very proactive now. Legal is now a really important part of everyone’s strategy to make sure you’re doing the right thing.
