Build an international body to stop deepfakes

Published: Sep 18, 2024, 7:58 PM

Updated: Sep 18, 2024, 8:01 PM

Hur Jung-yeon
The author is a reporter for JoongAng Sunday.

As the harm from deepfake videos spreads rapidly, the Yoon Suk Yeol administration is struggling to come up with countermeasures. After deciding on August 30 to punish not only the possession or purchase of deepfake clips related to sexual crimes, but also the viewing of them, the government announced on September 1 a comprehensive plan to tackle the increasing cyberattacks that use new technologies such as artificial intelligence (AI). The plan, jointly developed by 14 government ministries, includes increasing self-regulation by portal and platform operators.

Korea University Emeritus Professor Lim Jong-in, 68, says, “Excessive regulation that hinders AI innovation is not the only solution,” stressing that it can be more effective “to simultaneously push for autonomous regulation and active countermeasures.” Prof. Lim, who specializes in cryptography, is one of Korea’s pioneering cybersecurity experts. He founded the Graduate School of Information Security at Korea University in 2000 and served as its dean for 15 years. After serving as a special adviser to the president on national security in 2015 during the Park Geun-hye administration, Lim was appointed special adviser to President Yoon Suk Yeol on cybersecurity in January 2024. Since then, he has been dedicated to preparing government-level measures to address the growing cybersecurity challenges. The JoongAng Sunday met him at the presidential office in Yongsan District to hear his views on the threats posed by deepfakes and AI.

Q. Could deepfake crimes be eradicated through self-regulation alone?
A. Deepfakes have gone beyond the scope of technical control. Due to the nature of cyberspace, people who create deepfakes can easily hide their identities. The best response now is to detect an intrusion quickly and contain it before the damage spreads further. The National Cybersecurity Basic Plan recently announced by the Yoon government also underscores this point. Deepfake crimes are no different. It is difficult to limit the damage by simply regulating platforms. We should first ask them to regulate themselves voluntarily, but at the same time, we should clearly define illegal content such as child sexual abuse material (CSAM) to respond more efficiently to these new crimes.

What do you think about Telegram’s recent announcement that it wants to help the Korean government?
The 25 digital sex crime videos that the Korea Communications Standards Commission recently asked Telegram to remove were likely uploaded to official channels. But this is just the tip of the iceberg when you consider the numerous private chat rooms that are far more active than their official counterparts. Notably, Telegram complied with our government’s requests to remove deepfakes only when they violated the Public Official Election Act, because our national laws on deepfakes apply only under that act. In those cases, the videos were removed within three days. Therefore, related laws should be passed as soon as possible to require the removal of deepfakes from portals and platforms like Telegram.

California, dubbed “the Mecca of Big Tech,” is also pushing for a draconian law on deepfakes. Under current law, producing sexually exploitative deepfakes is not illegal, even if they target minors. That’s because the First Amendment to the U.S. Constitution, which guarantees freedom of speech, makes it impossible to prosecute and punish offenders if the person depicted in the deepfake is not a real person. But a bill passed by the California State Assembly in August would penalize deepfake CSAM even if it does not involve real people. The bill also sharply tightened regulations on deepfakes used in elections. The European Union (EU) and the United Kingdom have responded harshly to deepfakes by revising one law after another.

How much damage is being done globally by deepfake crimes?
While sexually exploitative materials are a major problem for Korea, financial fraud caused by deepfakes has reached serious levels in the United States. The damage caused by deepfakes in America amounted to $12 billion last year. There are pessimistic predictions that losses from ransomware attacks will soon exceed $20 billion. Concerns are growing rapidly that deepfakes can be used for financial gain by impersonating high-ranking officials within an organization or by accessing a company’s security network to steal important information. However, the deepfake regulatory laws recently introduced in the United States and Europe still need refinement, as their scope of application is too broad and vague.

What is the current state of deepfake detection technology?
Earlier this year, when fake videos of former U.S. President Donald Trump being taken away by police circulated, telltale flaws such as the shape of the mouth not matching the spoken words or the comical presence of an extra finger stood out. But if you watch the latest videos created by generative AI and deepfake apps, you’d be hard-pressed to tell the difference. AI capabilities are said to double every six months. As excellent as current detection technology is, it will likely cease to be effective within a few months.

Korea University professor emeritus Lim Jong-in, who was appointed special adviser to President Yoon Suk Yeol on cybersecurity in January, speaks about the growing challenges of deepfakes in an interview with the JoongAng Sunday at the presidential office, Sept. 3. (KIM HYUN-DONG)

According to the National Police Agency, 297 deepfake crimes were reported from January to July. Of the 178 suspects arrested, 131, or 73.6 percent, were teenagers. Police are preparing to respond more severely by using the Youth Protection Act when the target of a deepfake is a child or adolescent. In an alarming development, 53 percent of victims of deepfake sexual exploitation worldwide were Korean, according to recent data released by a U.S. cybersecurity firm.

Shockingly, a large number of deepfake victims turned out to be Korean celebrities.
Since Korean idol stars have gained a lot of international attention due to the huge popularity of K-culture, they are easy targets for deepfakes. But the problem is that many of the perpetrators who have made these videos are teenagers. Because AI has developed so quickly, there has been almost no ethics education. As a result, teenagers are turning into criminals, either out of curiosity or as a joke. We urgently need to educate them about the pros and cons of AI before it’s too late.

On August 8, the United Nations (UN) Ad Hoc Committee on Cybercrime unanimously adopted a draft UN Convention against Cybercrime. The first cyber-related agreement at the UN level attracted considerable attention because it included provisions requiring every country to develop criminal rules for online sexual crimes. The convention also requires UN member states to establish a uniform legal framework that meets the requirements for gathering evidence to respond to such crimes. The agreement is expected to be adopted by the UN General Assembly later this month.

What role does Korea play in building global cooperation to prevent the further spread of deepfakes?
Just as the international community established the International Atomic Energy Agency (IAEA) to prevent nuclear proliferation in the 20th century, we must create an international body with stronger authority and solidarity to stop the threatening spread of cybercrime in the 21st century. Cybercrime knows no borders. Currently, Korea and the United States are conducting joint cyber exercises and actively exchanging experts. We plan to go beyond Korea-US cooperation to the global level in the future. To achieve this goal, it is equally important to quickly implement laws related to cybersecurity and AI. AI will soon be the norm in all fields. If we first present a law that can serve as a model for the rest of the world, we can seize an opportunity to move forward as a leading country not only in hardware but also in software.

As deepfake cases increase, public concern about the side effects of AI is also growing.
Although Korea has a high level of AI adoption among OECD member countries, positive recognition of AI is still lacking. AI itself is not bad. According to recent data, the economic impact that can be reaped from successfully introducing AI into domestic industries amounts to 300 trillion won ($225.2 billion) annually. A national AI commission chaired by President Yoon Suk Yeol and attended by 10 government ministers and representatives from academia and industries is set to be launched later this month. It is time for Korea to draft a balanced basic AI law that effectively covers both promotion and regulation of the AI industry before it is too late.
