Professor Myuhng-Joo Kim emphasizes the importance of being aware of fake news generated by AI. Photo by Fatin Alya

With the general elections approaching on April 10, controversies surrounding deepfake technology have escalated, with more than a hundred election-related materials reported to employ such methods. This resurgence of public concern echoes 2022, when a fabricated video of President Yoon Suk Yeol endorsing a candidate went viral on the internet, spreading false electoral information.


The integration of artificial intelligence (AI) into advancing technologies has also led to a surge of cases centering on deep-learning-based synthetic media, both locally and worldwide. Earlier this year, for instance, the social media platform X saw the spread of deepfake pornographic images of Taylor Swift, raising concerns about sexual exploitation enabled by the technology. Yet this is only one of many international examples.


Paul Kim, head of the AI team at GMDSOFT, a company that empowers investigative agencies globally through research and development of mobile and digital forensic tools, shared his expertise on the issue.


“Deepfake is widely used across various sectors, especially in digital facial composition domains,” Kim explained. “However, its misuse can cause significant harm, ranging from invasion of private life to unimaginable losses at political and societal levels. The rapid evolution of deepfake has made it increasingly difficult to distinguish with the human eye alone, which is why digital forensics plays a crucial role in the investigation.”


According to Kim, the digital forensic field places heavy emphasis on determining the authenticity of discovered evidence. AI models are trained to study the traces left behind in media produced using deepfake techniques. In some cases, AI is used to analyze particular details such as incongruities in a person’s facial blood flow or the background elements of a video. Although face detection and deepfake detection are two different methodologies, GMDSOFT has developed a model that merges the two.


“We are expecting the model to be fully implemented into our products, MD-VIDEO AI and MD-RED, by the second half of the year, to enhance mobile and video analytics features,” Kim said.


Models to combat other forms of deepfake manipulation, such as voice phishing, are still in development due to the limitations of AI in replicating the laws of nature. Nevertheless, Kim believes that regardless of technological advancements, nothing can ever beat the power of truth.


Beyond AI experts, professionals in many other fields, including AI ethics and politics, have expressed concern about the danger of deepfake abuse in the upcoming elections.


Professor Myuhng-Joo Kim of the School of Information Security at Seoul Women’s University has continued his research and guideline development on AI ethics as director of the Right AI Research Center, an expert member of the Global Partnership on Artificial Intelligence, and chair of the AI Ethics & Policy Forum of Korea.


“When thinking of AI ethics, people often think of the ethics required of technicians and entrepreneurs, but I think the core stakeholder is the citizen,” Professor Kim said. “I have been emphasizing the ethics of society through Seoul PACT (Publicness, Accountability, Controllability, and Transparency), a set of principles for AI usage, and I believe that ethical AI can be developed through the moral usage of citizens.”


Professor Kim explained that even if technicians and scientists develop AI ethically and regulate its proliferation, they cannot predict how it will be used in reality. This makes it crucial for citizens to have ethical awareness of technology and participate in public debate on AI ethics.


As part of the effort to eradicate deepfake abuse in the upcoming general election, the Public Official Election Act was amended last October to punish those who produce, edit, or distribute fake content using AI. Moreover, the National Election Commission (NEC) established regulations for election campaigns using generative AI and began operating an AI-specialist team to monitor illegal AI-generated content during the campaign period.


Professor Kim noted that many people may be punished, as this is the first year the law is being applied. Furthermore, the current Public Official Election Act was revised on the basis of political parties’ agreement not to use generative AI in their campaigns. Still, the law cannot sanction enthusiastic individual supporters of politicians, as it covers only the actions of political parties and politicians, leading Professor Kim to stress that citizens must remain vigilant against deepfake content.


In particular, Professor Kim mentioned that since the NEC now operates an AI-specialist team and political parties actively monitor for false AI-generated information, such information cannot proliferate on a large scale. However, he stressed that if individual supporters spread false information using deepfakes a day or two before the election, it is impossible to regulate, as detecting, analyzing, and sanctioning such content takes time. This makes voters’ efforts to distinguish fake news all the more important.


As a way for voters to filter out election-related false information made with deepfakes, Professor Kim proposed the 10-10 rule. Under the rule, the receiver of information should spend the first 10 seconds not looking at the photo or video itself, but thinking about elements beyond the photographic information, such as the sender or source of the content. After viewing the image, the receiver should take another 10 seconds to think objectively about its credibility, setting aside his or her political opinions.


Professor Kim stated that the collapse of democracy due to AI is a problem experts around the world are concerned about, and emphasized that it is time for citizens to engage with this issue.


“Many countries are interested in how South Korean society, well known as an information technology powerhouse, deals with AI issues in this election,” Professor Kim said. “This election can demonstrate the potential of AI to shake up democracy, and I hope that many citizens are aware of its importance.”

Copyright © Ewha Voice. Unauthorized reproduction and redistribution prohibited.