
“Chat, is this Real?”: Exploring Restrictions on Digitally Manipulated Content



In this Explainer, find out...

  1. What is digitally manipulated content and how can it cause harm?

  2. How does Singapore aim to mitigate the impact of digitally manipulated content during elections?

  3. How do Singapore’s policies compare to other nations’ regulation of deepfake content?



Introduction


“Chat, is this real?” is a new catchphrase coined by youths online. It references a question live streamers frequently pose to their viewers, collectively known as the “chat”: whether a story or piece of content they are reacting to is real.


This catchphrase reflects the slew of disinformation and AI-generated content spreading online, making it no wonder that the Elections (Integrity of Online Advertising) (Amendment) Bill (ELIONA) has emerged in Singapore’s digital security landscape. The newly introduced Bill aims to regulate digitally manipulated content, such as deepfakes, which can taint the electoral process.


Doctored media depicting false representations of candidates not only deprives politicians of the opportunity to accurately represent themselves, but also weakens the ability of the people to engage in healthy civic involvement.


Globally, the use of digitally manipulated content has prompted regulatory bodies to set up policies that keep such tools in check. This Policy Explainer will explore the dangers of digitally manipulated content and the policy amendment that addresses its potential to disrupt the election process. Lastly, it will discuss the ethical and practical benefits of the amendment, as well as some potential limitations.



Digitally Manipulated Media And Its Impact


What is Digitally Manipulated Content?


Before explaining ELIONA and what it entails, it is best to clarify the scope of digitally manipulated content discussed in this article. Digitally manipulated content is defined as content generated or manipulated using AI techniques such as generative AI. It also includes manipulative depictions of candidates created using non-AI techniques such as image editing, dubbing, and splicing.


Impact of Digitally Manipulated Content


Impact on Singapore

Significant political figures in Singapore have been targeted by digitally manipulated content, leaving the public vulnerable to disinformation about political candidates.


In 2024, several MPs received extortion letters containing digitally altered photos depicting their faces on obscene images. Former Prime Minister Lee Hsien Loong was also the victim of a malicious deepfake video, in which an interview was altered to show him making untrue statements about US-China relations. The rise of deepfakes has made it easier than ever for anyone to create highly damaging defamatory material which, left unchecked, can destroy livelihoods and erode public trust in politicians.


Impact on the World

Such incidents are not isolated to Singapore, as the use of digitally manipulated content to influence voter behaviour has been observed globally. The alarming scale at which this phenomenon is taking place requires regulatory action to uphold the transparency of political processes. 


One example is when former U.S. President Joe Biden’s voice was used in robocalls that discouraged Democrats from participating in the New Hampshire primary election. Such robocalls have since been made illegal, having proven capable of confusing citizens with disinformation.


The Impact of Regulatory Reactions on Democracy 

Stricter regulations of this kind allow authorities to crack down on the use of technology to falsely represent candidates. Through new laws limiting the spread of and access to such materials, a candidate’s right to a fair representation of themselves can be upheld. At the same time, voters can be assured that their assessments of candidates are based on a truthful understanding of each candidate’s campaign and character.


For example, South Korea revised its Public Official Election Act to ban political campaign videos that use AI-generated content in the 90 days before an election. Violations of the revised law can lead to jail time of up to seven years, or a fine of up to 50 million won (almost S$50,000). The National Election Commission detected a total of 129 deepfakes deemed to violate the law during elections of public officials between January 29 and February 16.


Only when we safeguard the values of fairness, truth and transparency in the democratic process can politicians and voters worldwide have the confidence to take a stand on the improvements they wish to see in their societies.



The Elections (Integrity Of Online Advertising) (Amendment) Bill (ELIONA)


WHY: Main Objective of the Bill


The Elections (Integrity of Online Advertising) (Amendment) Bill (ELIONA) is a safeguard that aims to protect Singaporeans and politicians alike from digitally manipulated content during elections.


WHAT: Types of Digitally Manipulated Content Covered in the Bill


The Bill prohibits false representations of candidates saying or doing things they did not, which are created using digital tools. 


It covers both artificial intelligence (AI)-generated misinformation (such as deepfakes) and non-AI digital tools (such as realistic manipulated images or videos taken out of context and misrepresenting a candidate’s actions).


HOW: Detailed Mechanisms and Significance 


Fundamentally, the amendment criminalises manipulated online election advertising. It does so by banning online election advertising that contains realistic but false representations of a candidate saying or doing something that they did not say or do.


This ban is operationalised through the introduction of clauses which empower candidates and the Returning Officer to act against malicious actors. 


Candidates

Candidates are empowered to seek assistance against misrepresentations of themselves through the amendment.


When misrepresented by such content, they can request the Returning Officer (RO), a public officer appointed to oversee the impartial and smooth conduct of elections, to review and address the material. 


In the process, candidates must fill in a declaration form via the Candidate’s Portal on the Elections Department Singapore website, providing clear evidence as to why the reported content should be taken down under the amendment.


To prevent abuse of the law, it is illegal for candidates to knowingly make a false or misleading declaration. Punishments include a fine not exceeding S$2,000 and ineligibility to be elected as a Member of Parliament or as President. A candidate already elected as a Member of Parliament or President may also have their election invalidated.


Returning Officer (RO)

The RO is empowered to issue corrective directions to the various stakeholders involved in spreading the disinformation.


Individuals, social media platforms and internet service providers (ISPs) can be required to take down or disable access to misleading content during the election period. All stakeholders may face penalties for not complying with a corrective direction.


Given the extensive reach of social media services and the responsibility they must uphold, providers of such services who fail to comply can be fined up to S$1 million.


For all others, including individuals, disobeying a corrective direction carries a fine not exceeding S$1,000, imprisonment for a term not exceeding 12 months, or both.


WHY SO: Reviewing Global Responses to Digitally Manipulated Content in Elections 


Around the world, countries have been introducing new safeguards against digital disinformation aimed at disrupting elections, signalling that Singapore is not alone in seeing the need for stricter regulations.


In 2023, Slovakian electoral candidate Michal Šimečka was the victim of a deepfake audio clip that appeared to capture him discussing how to rig the election with a prominent journalist. While he spoke out against the fabricated material, this did not stop the clip from going viral right before the election. Šimečka’s rival, Robert Fico, went on to win the election, potentially due in part to the impact of the audio clip.


The European Union published the AI Act in response to incidents like the Slovak case. The Act classifies AI systems into four risk levels and imposes corresponding obligations:

  • Unacceptable Risk: AI systems deemed to pose significant threats to individuals’ rights or safety are prohibited. This includes applications like social scoring and manipulative AI practices.

  • High Risk: AI applications used in critical sectors such as healthcare, law enforcement, and transportation are subject to strict requirements, including risk assessments, data governance, and human oversight.

  • Limited Risk: These AI systems must meet specific transparency obligations, such as informing users when they are interacting with AI.

  • Minimal Risk: AI systems posing minimal risk (e.g. AI-enabled video games and spam filters) are largely unregulated under the Act.



Benefits And Limitations Of The Amendment


Benefits 


Principled Benefits


First and foremost, the amendment criminalises malicious acts meant to undermine trust between stakeholders in the political process. By making the act punishable by Singapore law, a clear moral precedent is set against online disinformation that disrupts elections. This sends broader signals of a strong belief in protecting truthful communication between candidates and voters, safeguarding the trust needed for a democratic society to function.  


Second, the amendment addresses power imbalances between candidates who are victims of manipulated content and the creators of such content. Without the amendment, it is difficult for a candidate to act against disinformation. According to the annual Online Safety Poll conducted by the Ministry of Digital Development and Information, about 80 per cent of those who made reports on harmful online content experienced issues with reporting: the platform did not take down the content or disable the account responsible, did not provide an update on the outcome of the report, or allowed the removed content to be reposted.


Conversely, the widespread accessibility of online media editing tools gives malicious actors great power to target candidates and disrupt the election process. To illustrate, the 2025 Identity Fraud Report by Entrust found that a deepfake attempt occurred every five minutes.


With the amendment, candidates are empowered to act against disinformation through the RO. As an impartial actor, the RO can assess the authenticity of the reported content and issue corrective orders to the offender if necessary. Compared to the poor track record of the reporting process on various online platforms, this process ensures the candidate’s report can be addressed with certainty.


Lastly, a rightful increase in responsibility is placed on social media services and internet service providers (ISPs). Both have the unique ability to control user access to content in the online space. For example, if a creator’s content goes against the Facebook Community Standards or Instagram Community Guidelines, Meta has the power to remove it. With the amendment, social media services and ISPs are liable to heavier legal penalties when failing to obey corrective directions, reflecting the degree of responsibility they hold as intermediary regulators in the online sphere.


Practical Benefits 

Beyond these moral benefits, the amendment brings practical benefits through the involvement of an impartial actor in the form of the RO. Rather than having an offence resolved by judges through a lawsuit, the RO allows for a more effective and efficient resolution of the issue, while also reducing the strain on the judicial system.


Limitations 


Despite its many benefits, it is important to consider the potential limitations of the amendment. As digital media editing and generative technologies continue to advance, disinformation can appear ever more realistic, making it difficult for the RO to verify its authenticity. The significance of this limitation depends on the future of the authenticity verification technology employed in Singapore. Currently, commercial tools and in-house tools developed with researchers, such as those at the Centre for Advanced Technologies in Online Safety (CATOS), are being used in manipulation detection processes.


Strong investment in research, such as S$50 million in funding over five years to CATOS, can help to develop new technological capabilities to detect online harms, including harmful digitally manipulated content. However, high funding may not necessarily translate into successful or implementable products. Hence, enforcement gaps may continue to exist if verification technology fails to keep up with the advancement of digitally manipulated content used in criminal activity. 


Additionally, the RO can face difficulty in the enforcement of the policy due to a lack of clarity around certain terms in the amendment. As mentioned by Member of Parliament Yip Hon Weng during the bill’s debate, the terms “realistic but false representations” and “manipulated content” can be too broad in scope. This makes it possible for the RO to overstep their bounds in dealing with offenders, which can then lead to accusations of abuse of power.



Conclusion


Through the ELIONA Bill, a significant step has been taken towards eliminating the threat of digitally altered media during elections. However, considering the rapid pace at which digitally manipulated content and the technologies used to create it are developing, Singapore must be ready to establish further regulatory frameworks, not only to deal with new digitally manipulated content pertaining to candidates, but also to address the harm such content may bring to the wider population.



This Policy Explainer was written by members of MAJU. MAJU is a ground-up, fully youth-led organisation dedicated to empowering Singaporean youths in policy discourse and co-creation.


By promoting constructive dialogue and serving as a bridge between youths and the Government, we hope to drive the keMAJUan (progress!) of Singapore.


The citations to our Policy Explainers can be found in the PDF appended to this webpage.

