Generative AI and Deepfakes – The Future of Fraud and Extortion

Deepfake fraud using Mr Lee Hsien Loong (Source: Lee Hsien Loong's Facebook page, https://www.facebook.com/leehsienloong/posts/you-might-have-seen-a-deepfake-video-of-me-asking-viewers-to-sign-up-for-an-inve/995860535233409/ )

As new technologies become more widely adopted in Asia, they are increasingly being used to commit fraud and extortion. The growing use of generative AI to create realistic digital images, such as 'deepfakes', and the expansion of unregulated cryptocurrencies for illicit payments could together unleash a tsunami of new fraud and extortion. Given that Asia is already facing a major fraud pandemic, driven by industrial-scale criminal enterprises in Cambodia, Myanmar and the Philippines, greater use of generative AI is likely to help transnational organised crime groups expand even further.

Fraud is criminal deception for financial gain. Generative AI can provide new fraud tools that allow criminals to more easily deceive victims and evade detection. Such generative AI tools can include “deepfakes,” which are artificially created video or audio recordings designed to impersonate real people for the purpose of fraud or extortion. Deepfakes and other tools can be used for “synthetic identity fraud,” which combines real data with high-quality fabricated information to create fake identities that can be used to make unauthorized transactions. AI apps can create even higher-quality phishing emails and text messages that are indistinguishable from real banking communications. Long gone are the days of highly amusing “Nigerian letters” that used unbelievable and poorly crafted language to trick victims into committing advance-fee fraud.

The increasing use of deepfakes should be a cause for great concern. American company Security Hero reported in its '2023 State of Deepfakes' study that the total number of deepfake videos online reached 95,820 in 2023, a 550% increase from 2019; that deepfake pornography made up 98% of all deepfake videos online; that 99% of the people targeted in deepfake pornography were women; and that South Korean singers and actresses accounted for 53% of those featured, making them the most targeted group.

Creative Korean Deepfakes

Korea is a leading Asian country in using new technology for business purposes, but generative AI also has a societal impact and is increasingly being used for criminal purposes. The Korean National Police Agency announced last week that it is currently investigating 513 cases of deepfake sex crimes, up from 180 reported cases in 2023, 160 in 2022 and 156 in 2021. Most of the victims, 62%, are teenagers, and of the 318 suspects arrested by Korean police this year, almost 80% were teenagers. The rising number of criminal cases, and their concentration among young people, is a major concern in Korea.

The deepfake problem in Korea centres on the widespread use of AI deepfake chat rooms on the Telegram app. Easy access to high-speed internet and the growing proliferation of AI apps have led to large numbers of young men and boys using selfie images, mainly of girls, to create deepfake pornographic videos. The abuse of Telegram to share such videos has become so severe that earlier in September Telegram's East Asia representative apologized for miscommunication over the issue, removed 25 pieces of sexually exploitative material and created a hotline to answer questions from Korean authorities.

Korean authorities' concerns that deepfakes could be used to target others in society more broadly were illustrated when the Ministry of Defense recently announced that photos of soldiers, military personnel and ministry officials would no longer be available on the military's Onnara System intranet or on military unit websites, owing to concerns that they could be misused to create sexually abusive deepfake images. Previous investigations had shown that military intranet photos were being used for deepfakes. The misuse of personal images to commit extortion is not new in Korea.

In November 2020, 25-year-old Cho Ju-bin was found guilty on charges related to masterminding pay-to-view chat rooms on Telegram in which users paid cryptocurrency to watch sexually exploitative content featuring women who had been blackmailed into participating. Cho was eventually sentenced to 40 years in prison. The case widened when investigations uncovered multiple chat rooms.

In May 2020, Moon Hyung-wook, 24, was arrested and accused of forcing 21 women and girls to share nearly 3,800 sexually explicit videos of themselves for distribution in a sexual exploitation chatroom called Nth Room on Telegram. The Nth Room and other similar Telegram chatrooms operated as online pay-to-view clubs offering abusive sexual videos of women and underage girls, and are believed to have had around 260,000 users. Moon, whose Telegram username was 'God God', was sentenced to 34 years in prison.

Moon Hyung-wook, 24, mastermind of the Nth Room chatrooms (Source: Alamy)

Unique Singapore Deepfake

In December 2023, a video of Deputy Prime Minister Lawrence Wong promoting an investment scam went viral on social media. The video featured the logo of The Straits Times, a long-standing and renowned news publication in Singapore, with Wong stating that "Dear Singaporeans, the day has come. I am pleased to introduce you to the Quantum Investment Project. Starting November 11, 2023, we launched a project that allows everyone to receive guaranteed monthly dividends. With minimal investments, the automated trading system performs only the most profitable trades, allowing investors to increase their income. Earnings from $8,000 are absolutely real. The company has made sure that this income is available to everyone. That is why we have lowered the minimum investment amount to $250. All payouts are guaranteed by me personally and my reputation."

The video was a very high-quality deepfake and contained hyperlinks to websites soliciting investment in schemes with guaranteed returns. The targeting of high-ranking government officials widely trusted by the public continued in June 2024, when Senior Minister Lee Hsien Loong posted on Facebook to say that he too was being used in deepfake fraud: "You may have seen a deepfake video in which I ask viewers to sign up for an investment product that claims to have guaranteed returns. The video is not real! AI and deepfake technology are getting better every day. In addition to mimicking my voice and overlaying the fake audio onto real footage of me delivering the National Day Message last year, scammers even synced my mouth movements to the audio. This is extremely disturbing: people watching the video could be fooled into thinking I actually said those words.

"Remember, if something sounds too good to be true, beware. If you see or receive scam advertisements from me or any other Singaporean public official promoting an investment product, do not believe them."

The deepfake videos of Deputy Prime Minister Wong and Senior Minister Lee were of such exceptionally high quality that the average person would be unable to distinguish them from real footage without considering the context (why would such a high-ranking government official be recommending financial products?).

Concerns that the criminal use of deepfakes could have a political impact, for example by undermining public trust in civil servants, have led the Singapore government to propose a new law banning deepfakes and digitally manipulated content during elections. The Protection from Online Falsehoods and Manipulation Bill proposes to ban the publication of digitally generated or manipulated online election advertisements that realistically depict a candidate saying or doing something he or she has not in fact said or done. The new law applies only to AI-generated misinformation that misrepresents election candidates, and only until the polls close.

Everyone looks real

In February 2024, British firm Arup's Hong Kong office was reported to have lost HK$200 million (US$25.6 million) to fraud after an employee was tricked by a digitally faked version of the firm's finance director into ordering money transfers on a video call. The Hong Kong employee had received a phishing email in January, purportedly from Arup's finance director in London, telling him to make a secret transaction. He then joined a video call with what he thought were Arup's senior management, but who were in fact digital recreations produced with deepfake technology.

Identity verification company Sumsub claims its customer data shows AI-driven deepfake fraud to be the fastest-growing form of identity fraud. According to Sumsub, "In 2023, deepfake technology will continue to pose a significant threat to identity fraud. The widespread accessibility of this technology has made it easier and cheaper for criminals to create highly realistic audio, photo and video manipulations that trick individuals and fraud prevention systems."

The use of generative AI for fraud is not unique to Asia and is growing around the world. But Asia hosts a transnational organized crime infrastructure that has generated billions of dollars in criminal proceeds over the past decade, enabling the development of a resilient criminal apparatus that national law enforcement agencies have so far been unable to defeat. This criminal infrastructure has been established in Cambodia, Myanmar and the Philippines, using banking facilities in financial centers such as Hong Kong and Singapore. While law enforcement actions against parts of this infrastructure have grown, the vast proceeds of crime have already been diverted into legitimate investments that can be used to mask new criminal enterprises.

To address this threat, law enforcement agencies need a step change in partnership with new multinational task forces targeting fraud with advanced technology. The private sector needs a transformation in its approach to financial crime prevention with much smarter use of AI technology that has a real impact on preventing crime and protecting consumers (and not just making it harder for consumers to bank). Not only does the future of fraud lie in AI and deepfakes, but so do some of the solutions.
