
Big Data is Watching You: Surveying Singapore’s Evolving Data Protection Act (Part 2)

Updated: Sep 24


In this Explainer, find out…

  1. How does the amended Personal Data Protection Act compare with the European Union’s data protection legislation?

  2. What are the implications of artificial intelligence technologies for Singapore’s personal data protection approach?

  3. What steps has Singapore taken to address these implications?


Introduction


In the first part of this Policy Explainer, we explored the evolution of the data landscape, the enactment of the Personal Data Protection Act 2012 (PDPA) and its subsequent amendments. In this piece, we will broaden our perspective, comparing Singapore’s PDPA with the European Union’s (EU’s) General Data Protection Regulation (GDPR). We will then extend this comparison by looking into the regulation of personal data in artificial intelligence (AI) governance across Singapore and the EU.


Comparing Singapore and the EU’s Approaches


The PDPA is just one approach to balancing individuals’ need for data privacy and organisations’ interests in harnessing personal data. To better understand how Singapore arrived at this equilibrium, it is worth looking beyond Singapore’s borders. Among international frameworks, the EU’s GDPR stands out as an especially rigorous and influential piece of data protection legislation. Crucially, it is one that Singapore considered in drafting the amendments to the PDPA. Comparing the two jurisdictions’ approaches may thus allow a better understanding of what Singapore’s approach does (and does not) entail.


The European Parliament and the Council of the EU adopted the GDPR in 2016, though it only became enforceable in 2018. The comparison between the GDPR and the PDPA will be made in terms of their:


  • Underlying philosophies;

  • Approach to individual rights and consent regimes; and

  • Enforcement mechanisms.


Underlying Philosophy


At its core, the GDPR upholds individual rights to privacy and personal data protection. This echoes the principle from the Charter of Fundamental Rights of the EU, which enshrines the dignity and autonomy of individuals within the EU. In this sense, the GDPR treats personal data as intrinsically linked to individual identity, granting persons meaningful and enforceable control over how their data is processed.


In contrast, the PDPA places less emphasis on individual rights, opting for a more business-oriented approach. It is designed to facilitate legitimate and innovative uses of personal data, reflecting the Government’s aim to balance personal privacy concerns with economic interests. This divergence between the two regulations’ fundamental philosophies, in turn, translates into considerable differences between their provisions.


Approach to Individual Rights and Consent


One such difference is that the GDPR grants more extensive protection of individual rights than the PDPA. While both laws provide the right to access and rectify one’s personal data, as well as the right to withdraw consent, the GDPR additionally protects the right to object and the right to erasure.


Under the GDPR, an individual has the right to object to the processing of their personal data for direct marketing purposes. The individual can also require organisations to stop processing their data even if this processing is necessary for public tasks or legitimate interests (unless there are overriding valid grounds). This is unlike the PDPA, which recognises public and legitimate interests as legal bases for processing personal data, regardless of an individual’s objection.


Next, the GDPR provides individuals with the right to have their personal data erased under certain circumstances. For example, this right applies when:


  • The personal data is no longer necessary for the original purpose it was collected for;

  • The individual withdraws consent and there is no legal ground for processing the data; or

  • The individual objects to the processing.


On the other hand, the PDPA does not recognise the right to erasure. Although the PDPA requires organisations to stop processing personal data when the individual withdraws consent (and there is no other legal basis), it does not oblige organisations to erase this data upon request. Instead, organisations only have to erase personal data if it is no longer necessary for the purpose it was collected for, and any legal or business purposes. The Personal Data Protection Commission (PDPC) can also direct organisations to erase unlawfully collected data. As such, the Government has maintained that these provisions—while not identical to the right to erasure—create a “substantively similar effect” by ensuring that organisations stop retaining personal data when retention is no longer justified.


Further, while both laws recognise consent as a legitimate basis for processing personal data, the GDPR applies a higher standard of consent compared to the PDPA. The GDPR requires that consent be freely given, specific, informed and unambiguous. In particular, the individual has to give this consent through a statement or clear affirmative act, and consent cannot be implied. This directly contrasts with the PDPA, which allows for consent to be implicitly deemed by the individual’s conduct, contractual necessity or notification without objection. Overall, these differences underscore the GDPR’s comprehensive, rights-centric approach to personal data protection, as opposed to the PDPA’s more flexible, business-oriented regulatory system.



Accountability and Enforcement


Lastly, the difference in underlying philosophies manifests in the two laws’ respective enforcement mechanisms. Compared to the PDPA, the GDPR imposes a higher limit for monetary penalties for offending organisations. For severe violations, the maximum fine is €20 million or four per cent of the organisation’s global annual turnover, whichever is higher. In practice, EU authorities have also been willing to mete out hefty fines, signalling a punitive approach which emphasises accountability and deterrence. For instance, TikTok was recently fined €530 million by an EU regulator over its allegedly inadequate data protection practices.


In contrast, the PDPC has issued more modest fines, giving substantial weight to mitigating factors like the organisation’s cooperation throughout the investigation. For example, local firm Commeasure was fined S$74,000 for the leak of nearly 5.9 million customers’ personal data—a penalty far below the cap. Moreover, the PDPC’s Advisory Guidelines stress enforcement tools like voluntary undertakings and alternative dispute resolution, generally regarding investigations and directions as measures of last resort. Together, these practices point to the PDPC’s more consultative approach to enforcement, focusing on compliance assistance and self-correction.


Not only does the GDPR impose heavier penalties, but its scope of enforcement is also broader than the PDPA’s. In general, the GDPR applies to:


  • Any organisation in the EU; and

  • Any organisation, regardless of location, that processes the personal data of individuals in the EU when offering them goods or services or monitoring their behaviour.


This gives the GDPR an extraterritorial enforcement effect, as non-EU organisations that target EU residents have to comply with its regulations. Conversely, the PDPA has a relatively limited scope: its regulations apply specifically to private organisations that process personal data in Singapore (public agencies are instead subject to a different set of laws). These differences indicate that the GDPR takes a stricter enforcement stance, reflecting a stronger emphasis on the rights to privacy and personal data protection, and a lower tolerance for breaches of these rights.


On the whole, the GDPR and the PDPA share a key objective in ensuring the responsible handling of personal data. However, they differ in the extent to which they prioritise safeguarding personal privacy over enabling data-driven innovation. The PDPA’s priorities lean towards the latter: its main goal is to remain “fit for purpose” for a complex digital economy, so as to ensure that Singapore is poised to ride the waves of technological change.


Implications of AI on Personal Data Protection


That said, since the most recent PDPA amendments were passed in 2020, new winds of change have already begun to blow. AI models are now increasingly capable of performing human cognitive tasks and facilitating job functions, which could significantly boost business productivity and economic growth. Eager to leverage this technology, many organisations are on the hunt for vast amounts of data—including personal data—to train and improve their AI models. This presents regulators with new challenges in ensuring the protection of privacy and personal data. Three key risks are discussed below:


  • Risks of large-scale personal data use;

  • Lack of transparency in data use; and

  • Bias in automated decision-making.


Risks of Large-scale Personal Data Use


First, the use of massive amounts of personal data to train AI systems creates risks of data breach. AI models can leak training data containing personal information, leading to privacy violations. Even if organisations anonymise training data, re-identification risks exist when AI models can cross-reference multiple large datasets. Furthermore, the incentive to use ever greater amounts of training data to improve AI models may cause tension with data protection principles which require organisations to only process personal data that is necessary. Regulators thus face the challenge of ensuring that AI development harnesses personal data in a responsible and secure manner.


To complement the PDPA, the PDPC released Advisory Guidelines on the use of personal data in AI systems. These guidelines recommend the practice of data minimisation, advising organisations to collect only the personal data that is necessary for improving an AI system. The anonymisation and de-identification of personal data is also encouraged, along with considerations for assessing the risk of re-identification. To illustrate, one such consideration is whether a motivated individual could likely find means to re-identify the anonymised dataset using organisational or public information.
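
To make these practices more concrete, the sketch below shows one way an organisation might pseudonymise direct identifiers, generalise quasi-identifiers and run a crude k-anonymity check as a rough signal of re-identification risk. It is a minimal illustrative sketch, not drawn from the Guidelines themselves; the field names, generalisation rules and records are hypothetical.

```python
import hashlib
from collections import Counter

# Hypothetical customer records; the fields and values are illustrative only.
records = [
    {"name": "Alice Tan", "postal_code": "238823", "birth_year": 1990, "purchase": "stroller"},
    {"name": "Ben Lim",   "postal_code": "238841", "birth_year": 1990, "purchase": "laptop"},
    {"name": "Chloe Ng",  "postal_code": "238823", "birth_year": 1991, "purchase": "stroller"},
]

def pseudonymise(record):
    """Drop direct identifiers and generalise quasi-identifiers (data minimisation)."""
    return {
        # Replace the name with a one-way hash so records can still be linked
        # internally without exposing identity (pseudonymisation, not anonymisation).
        "id": hashlib.sha256(record["name"].encode()).hexdigest()[:8],
        # Generalise: keep only the postal sector and the birth decade.
        "postal_sector": record["postal_code"][:2],
        "birth_decade": record["birth_year"] // 10 * 10,
        "purchase": record["purchase"],
    }

def smallest_group_size(rows, quasi_identifiers):
    """Crude k-anonymity check: size of the smallest group sharing the same
    quasi-identifier values. A small k signals higher re-identification risk,
    especially if the data can be cross-referenced with public datasets."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

anonymised = [pseudonymise(r) for r in records]
k = smallest_group_size(anonymised, ("postal_sector", "birth_decade"))
print(f"Smallest group size (k) = {k}")  # k = 1 would flag a unique, re-identifiable record
```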


The EU AI Act shares a similar concern over the use of data in developing AI models, imposing stricter requirements on datasets for “high-risk” AI systems that could significantly affect the fundamental rights of EU citizens. Such systems span a wide range, from AI used in robot-assisted surgery to AI tools that assist judicial decision-making. The Act also requires an assessment of the availability, quantity and suitability of the datasets used in high-risk AI systems. Where personal data is involved, the original purpose of its collection has to be considered.


Lack of Transparency in Personal Data Use


A second challenge is the lack of transparency in data use in AI systems. To obtain informed consent, data protection laws often require that organisations notify individuals of how and why their personal data will be used. But with AI, the same dataset can be repurposed for multiple unforeseen applications. Also, AI systems can make sensitive inferences about individuals from a seemingly unrelated dataset (e.g., a woman’s shopping history may be used to predict her pregnancy status). Since these inferences can be reused in entirely different contexts, individuals may not be able to foresee how exactly their data will be used. To exacerbate the issue, many modern AI systems operate in highly complex and opaque ways such that even developers struggle to understand how an AI model uses data to reach a particular outcome. Organisations may thus find it difficult to obtain meaningful consent for personal data use in AI models.


Hence, the PDPC’s Advisory Guidelines provide further details on implementing the Consent and Notification Obligations under the PDPA. The guidelines call on organisations to empathise with consumers and enable them to provide meaningful consent to the processing of personal data for AI systems. Organisations are advised to avoid excessive technical language and detail, so that consumers can easily understand how their personal data would be used for an identified purpose. Similarly, the GDPR requires that any consent relied on for the use of personal data in AI models be explicit and informed, underscoring how regulators worldwide recognise individuals’ right to know how their personal data is processed.


Bias and Automated Decision-making


Third, the use of personal data in AI systems involves risks associated with bias and automated decision-making. If training data reflects historical or societal biases, AI models may replicate these biases in their outputs. AI-enabled recruitment provides a case in point, where algorithmic bias can result in discriminatory hiring practices. Further, when organisations fully automate consequential decisions based on one’s personal data, it becomes hard for individuals to seek human review and challenge potentially unfair, AI-driven outcomes. In this way, the use of personal data in AI-enabled decision-making raises pressing concerns about fairness and autonomy.


To ensure that protected characteristics (like race and religion) are well represented in datasets for AI systems, the PDPC has recognised the need to use personal data to capture a wider range of demographics. As such, developers can forgo consent under the Business Improvement Exception so long as the use of personal data for this purpose is relevant to the effectiveness or improved quality of the AI system. Developers are also encouraged to consider industry practices and viable alternatives that can de-bias AI models without using personal data. Similarly, the EU AI Act recognises the exceptional need for organisations to process personal data for bias detection in high-risk AI systems, subject to several conditions. These include the condition that bias detection and correction cannot be fulfilled by processing other forms of data (such as synthetic or anonymised data).
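
As a rough illustration of what bias detection might involve, the sketch below compares approval rates across a protected attribute using a simple demographic parity gap. It is a hypothetical example; the groups, records and threshold are assumptions for illustration, not requirements of the PDPC’s Guidelines or the EU AI Act.

```python
from collections import defaultdict

# Hypothetical automated decisions, each tagged with a protected group label.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Approval rate per protected group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        approved[row["group"]] += row["approved"]
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")

# A large gap (e.g. above an internally agreed threshold such as 0.2) might
# prompt a review of the training data or the model before deployment.
if gap > 0.2:
    print("Potential bias detected; consider rebalancing data or adjusting the model.")
```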


Conclusion


The way the world uses data has been evolving rapidly since the PDPA was first introduced in 2012, and it continues to change amid cybersecurity breaches of unprecedented scale. Worrisome cases like the theft of 1.5 million SingHealth patients’ records in 2018 have undoubtedly heightened public wariness over the protection of personal data.


Yet technological advancement will continue to reshape how we utilise and manage data. By 2023, the generative AI market stood at nearly US$45 billion, a sevenfold increase from 2020, when the PDPA was last amended. Such disruptive technology can transform the scale at which data is collected and exchanged, necessitating further reviews of regulations worldwide.


In the ever-evolving data landscape, only the constant reconstruction of our regulatory frameworks can prevent an Orwellian world from unfolding before our eyes. Staying vigilant is perhaps our only refuge from Big Data’s watchful eyes.


This Policy Explainer was written by members of MAJU. MAJU is a ground-up, fully youth-led organisation dedicated to empowering Singaporean youths in policy discourse and co-creation.


By promoting constructive dialogue and serving as a bridge between youths and the Government, we hope to drive the keMAJUan (progress!) of Singapore.


The citations to our Policy Explainers can be found in the PDF appended to this webpage.



