Experts Discuss Rising Threat Of Deepfakes At International Government Communication Forum
Experts at the 13th International Government Communication Forum (IGCF 2024) addressed the risks posed by deepfakes on the second day of the event and discussed what governments can do to raise awareness and educate people to avoid them.
Reports say that deepfakes will cost the world over US$10 trillion by 2025, posing challenges for various institutions, including hospitals and governments.
A session titled "Why Resilient Governments are Building Protective Shields with Artificial Intelligence" at the IGCF 2024 highlighted the fear of deepfakes (images, videos, or audio altered or created with artificial intelligence) and how they make people doubt the authenticity of content.
The session's panel included Hector Monsegur, security researcher, Director of Research, and founder of a cybersecurity startup; Nader Al Gazal, academic and expert on AI and digital transformation; Alan Smithson, Co-Founder of Metaverse (Facebook); and Dr. Inhyok Cha, Professor at the Gwangju Institute of Science and Technology and Deputy President for Global Cooperation (South Korea).
The panelists discussed how deepfakes are a significant issue because anyone can create them online for free. Smithson highlighted that technology itself isn't inherently good or bad; its impact depends on how it's used. He emphasized that while creating such tools is easy, it's crucial to use them responsibly, and governments need to ensure technology is used for positive purposes.
"Technology like deepfakes enable this and because of its accessibility. It allows millions of people to create their own stories in ways that have never been done before. This is where the question of ethics comes into play. Governments must invest heavily in raising awareness and training people to avoid deepfakes," Dr. Cha was quoted as saying by WAM.
"Deepfakes are a very serious matter and must be dealt with effectively. But we can also leverage AI and virtual assistants to help regulate deep fakes despite their rapid growth," Al Gazal added.
Monsegur pointed out that there are currently no tools to detect and prevent deepfake attacks in real time. "At some point, each and everyone will be breached by deepfakes. But if you set mitigating controls in place prior to the breach, you're going to limit the damage," he said.
He recommended creating a multi-factor authentication tool for social media platforms like WhatsApp. This tool would let users click on a friend's name to send an authentication request during a call, helping to verify if the person they are speaking to is genuine.
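To make the idea concrete, the sketch below shows one way such an in-call verification could work in principle: a challenge-response check in which the contact's device proves possession of a shared secret established when the contact was first added. This is purely illustrative; no real WhatsApp or platform API is used, and all names and the shared-secret setup are assumptions, not part of anything described at the forum.

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch of an in-call verification flow: both parties hold a
# shared secret established out of band (assumed here, e.g. when the contact
# was first added). During a call, the challenger sends a random nonce; the
# genuine contact's device answers with an HMAC over that nonce. A caller who
# does not hold the device (and its secret) cannot produce a valid response.

def issue_challenge() -> bytes:
    """Generate a one-time random challenge to send to the contact's device."""
    return secrets.token_bytes(16)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Run on the contact's device: prove possession of the shared secret."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Run on the challenger's device: check the response in constant time."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Illustrative usage: the secret and both "devices" are simulated locally.
if __name__ == "__main__":
    secret = secrets.token_bytes(32)     # established when the contact was added
    challenge = issue_challenge()        # sent when the user taps the friend's name
    answer = respond(secret, challenge)  # computed on the friend's phone
    print("Caller verified:", verify(secret, challenge, answer))
```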
Smithson mentioned that the U.S., EU, and China have begun to regulate artificial intelligence (AI). Recently, California lawmakers passed legislation regulating deepfakes, aimed at protecting workers and governing the use of AI.
The law prohibits election-related deepfakes and requires large social media platforms to remove deceptive content in the 120 days before and 60 days after an election. Campaigns must also disclose whether their ads or other materials have been altered by AI.
Furthermore, Smithson noted that AI-generated content must be clearly labeled.
© Copyright 2024 IBTimes AE. All rights reserved.