Transparency tech, better global cooperation needed to fight deepfakes: Experts
A panel discussion on fighting deepfakes at the Asia Tech x Singapore summit held at Capella Singapore, on May 31.
ST PHOTO: BRIAN TEO
SINGAPORE – Technology that tracks where and how content is generated is crucial in the global fight against deepfakes, but governments and people also need to share and learn from one another to battle the problem, panellists at a global tech event in Singapore said on May 31.
Some of the innovations to combat deepfakes include a digital signature system in Sony cameras, which gives photographers proof of authenticity for their work, as well as metadata attached to a piece of content that tells users if it has been tampered with after creation.
Sony’s digital signature system certifies the authenticity of an image at the point of capture. A “digital birth certificate” is created and retained throughout rounds of edits. It also gives information on whether the image captured is that of a three-dimensional object, or if it is a photo of an image or video.
This allows journalists to prove that photos they took are genuine, rather than a photo of a photo.
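The general principle behind such capture-time signing can be shown with a short sketch. This is not Sony's actual system: the key handling, the Ed25519 scheme and the function names below are assumptions chosen only to illustrate how a signature created at the point of capture lets anyone later check whether the image bytes have changed.

```python
# Illustrative sketch only, not Sony's actual system. It shows the general
# principle the article describes: sign the image at the point of capture,
# then verify later that the bytes have not been altered. The Ed25519 scheme
# and key handling are assumptions for this example; a real camera would
# keep its private key in secure hardware.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # hypothetical in-camera key pair
public_key = camera_key.public_key()        # published for later verification

def sign_at_capture(image_bytes: bytes) -> bytes:
    """Create the capture-time signature (the 'digital birth certificate')."""
    return camera_key.sign(image_bytes)

def is_untampered(image_bytes: bytes, signature: bytes) -> bool:
    """Check whether the image still matches its capture-time signature."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

image = b"raw sensor data goes here"        # placeholder for real image bytes
sig = sign_at_capture(image)
print(is_untampered(image, sig))              # True: unchanged since capture
print(is_untampered(image + b"edited", sig))  # False: altered afterwards
```

A bare signature like this only proves the bytes are unchanged; Sony's certificate additionally records whether the subject was a real three-dimensional scene rather than a photo of a photo or a screen.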
“The issue of deepfake is one of the most serious issues we need to tackle, in terms of principles, conduct and technology,” Sony Group’s chief technology officer Hiroaki Kitano said during a panel discussion about fighting deepfakes at the Asia Tech x Singapore summit held at Capella Singapore.
Microsoft uses Content Credentials, said fellow panellist Natasha Crampton, the company’s chief responsible artificial intelligence (AI) officer.
Created by the Coalition for Content Provenance and Authenticity, it is a tool for users to sign and authenticate their content using digital watermarking credentials.
With this tool, users can hover over a “CR” icon on a piece of content to find details such as who created it, when or where it was created, whether it was AI-generated and whether it has been edited or tampered with since.
The coalition, which includes committee members Microsoft, Sony, Google and Adobe, was established in 2021 to work on tracing the provenance of media content.
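The kind of information exposed behind the “CR” icon can be pictured with a small sketch. The manifest structure and field names below are invented for illustration; real Content Credentials are cryptographically signed manifests defined by the C2PA specification and are read with C2PA tooling, not an ad hoc dictionary like this one.

```python
# Illustrative sketch only. The manifest structure and field names here are
# hypothetical; real Content Credentials follow the C2PA specification.
from typing import Any

manifest: dict[str, Any] = {
    "creator": "Jane Photographer",             # who made the content
    "created_at": "2024-05-31T10:00:00+08:00",  # when it was created
    "location": "Singapore",                    # where it was created
    "ai_generated": False,                      # produced by an AI model?
    "edits": ["cropped", "exposure adjusted"],  # changes made after creation
}

def summarise_credentials(m: dict[str, Any]) -> str:
    """Summarise the details a user might see behind the 'CR' icon."""
    return "\n".join([
        f"Creator: {m.get('creator', 'unknown')}",
        f"Created: {m.get('created_at', 'unknown')} in {m.get('location', 'unknown')}",
        f"AI-generated: {'yes' if m.get('ai_generated') else 'no'}",
        f"Edited after creation: {'yes' if m.get('edits') else 'no'}",
    ])

print(summarise_credentials(manifest))
```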
All five panellists said technology is not a silver bullet in the fight against deepfakes online, and that the effort will also require regulatory frameworks and international collaboration among governments and private sector firms.
Collaboration needs to start from the top with national leaders, who oversee the critical infrastructure that underpins every society and that deepfakes can affect, said Ms Ieva Martinkenaite, senior vice-president of research and innovation at Norwegian telecommunications company Telenor Group.
At the summit on May 31, Minister for Communications and Information Josephine Teo said the collaboration between Singapore’s Infocomm Media Development Authority (IMDA) and Microsoft on content provenance and responsible AI is an example of how the Government and industry can partner up to develop practical measures.
There needs to be a clear definition of what a deepfake is and is not before standards can be established to regulate it, said Mr Stefan Schnorr, State Secretary of the German Federal Ministry for Digital and Transport. Thereafter, labelling mechanisms to pick out deepfake content can be developed.
He added that discussions about such standards and mechanisms will need to be transparent and involve all stakeholders, such as academics. “Technical firms, for example, are very interested to find solutions (against deepfakes). But on the other hand, they don’t want to hinder innovation,” Mr Schnorr said.
“Therefore, it is important to work with all relevant stakeholders to promote innovation, but also uphold ethical standards and safeguard human rights.”
For countries and people across the world to learn from one another's experience with deepfake technology, it might be necessary to establish a global deepfake "observatory", said Professor Yi Zeng of the International Research Centre for AI Ethics and Governance at the Chinese Academy of Sciences, who is also director of the Brain-Inspired Cognitive Intelligence Lab.
He also suggested that countries collaborate on a global fact-checker accessible to every internet user.
Ms Martinkenaite urged internet users to keep learning and to be critical and curious about everything they see, which will improve their ability to recognise deepfake content.

