SINGAPORE — A new Bill will put in place measures to counter digitally manipulated content during elections, including misinformation generated using artificial intelligence (AI) — commonly known as deepfakes.
The proposed safeguards under the Elections (Integrity of Online Advertising) (Amendment) Bill will apply to all online content that realistically depicts a candidate saying or doing something that he or she did not. This includes content made using non-AI techniques like Photoshop, dubbing, and splicing.
If the Bill is passed, candidates will be able to ask the Returning Officer (RO) to review content that misrepresents them. Making a false declaration of such misrepresentation will be an offence, punishable by a fine or the loss of a seat.
Others can also request a review of such content. The prohibition will apply from the issuance of the Writ of Election to the close of polling on Polling Day. The move comes ahead of a general election that must be held by November 2025.
The RO can issue corrective directions to those who publish prohibited online election advertising content under the proposed new law. Social media services that fail to comply may be fined up to $1 million upon conviction, while all others may be fined up to $1,000, jailed up to a year, or both.
Corrective actions include taking down the offending content, or disabling access by Singapore users to such content during the election period.
Minister of State for Digital Development and Information Rahayu Mahzam tabled the Bill in Parliament on Sept 9. If passed, it will amend the Parliamentary Elections Act and the Presidential Elections Act to introduce the new safeguards.
To be protected under the new law, prospective candidates will first have to pay their election deposits and consent to their names being published, sometime before Nomination Day, on a list on the Elections Department's website. If they do so, it will be the first time that the identities of prospective candidates are made public before Nomination Day.
The measures will also cover successfully nominated candidates from the end of Nomination Day to Polling Day.
The Bill will be tabled for a second reading at the next available Parliament sitting.
The Ministry of Digital Development and Information (MDDI) said in a press release that while the Government can already deal with individual online falsehoods against the public interest through the Protection from Online Falsehoods and Manipulation Act (Pofma), targeted levers are needed to act on deepfakes that misrepresent candidates during elections.
"Misinformation created by AI-generated content and deepfakes are a salient threat to our electoral integrity," said an MDDI spokesperson. "We see this new Bill not as a replacement for POFMA, but rather as a means to augment and sharpen our regulations under the online election advertising regime, so as to shore up the integrity of our electoral process."
The spokesperson added that Pofma works when the Government knows what the facts are, such as when someone spreads a falsehood about the reserves or housing prices.
"However, in the case of deepfakes featuring political candidates, it is much more difficult for the Government to establish what an individual said or did not say, did or did not do. Therefore, we do need the individual to come forward and say that this is a misrepresentation."
The spokesperson added: "While we can use a set of technological tools to assess whether the content is AI-generated or manipulated, these tools give us a certain confidence level, but it is not 100 per cent. So, there is quite a lot of weight given to what an individual claims is the truth, and this is where it differs from Pofma."
Fraudsters have disrupted elections in many countries, including Slovakia and India. More recently, fake videos of presidential nominees Kamala Harris and Donald Trump have proliferated on social media ahead of the November polls, widely billed as America's first AI election.
In response, momentum has been growing worldwide to deal with deepfakes during elections. For example, South Korea implemented a 90-day ban on political AI-generated content before its election in April. Its National Election Commission said it detected a total of 129 deepfakes deemed to violate its laws on the election of public officials between Jan 29 and Feb 16.
In February, Brazil also banned synthetic content that harms or favours a candidacy during elections.
Closer to home, then-Prime Minister Lee Hsien Loong warned the public of deepfake videos circulating online in December 2023 which showed him and then-Deputy Prime Minister Lawrence Wong promoting an investment platform. The videos used AI to mimic their voices and facial expressions.
Minister for Digital Development and Information Josephine Teo told Parliament in January that Singapore needs to grow new capabilities to keep pace with scammers and online risks. She announced an arsenal of detection tools Singapore is developing to tackle the rising scourge of deepfakes and misinformation. The tools will be designed under a new $50 million initiative to build online trust and safety.
Beyond the elections, a new code of practice will be introduced to tackle deepfakes and other forms of manipulated content.
The Infocomm Media Development Authority (IMDA) will introduce the code requiring social media services to put in place measures to address digitally manipulated content.
This will ensure that they do more to gatekeep, safeguard and moderate content on their platforms.
IMDA will engage social media services in the coming months to work out the details of the code.
This article was first published in The Straits Times. Permission required for reproduction.