Zheng Hong Sebastian See, LLB and BCL (Oxon), Law Graduate and BVS Student

(How and When) Should AI be Regulated?

Before answering the question of whether AI should be regulated at all, we need to dissect two issues. First, what is AI? Second, what exactly is being regulated?

What is AI?

There is no universal definition of AI.

According to Pei Wang, a working definition of AI is ‘automated systems to perform tasks normally requiring human intelligence’ (Wang 2019).

Nonetheless, this definition is open to critical examination. First, the goalposts are constantly shifting: the tasks that normally require human intelligence change over time, so the threshold for satisfying this definition will keep rising (or falling!).

Second, the word ‘automated’ is ambiguous. For example, if a machine runs entirely on its own, except that a human must press the final button for it to work, does this count as ‘automated’?

These difficulties only reinforce the point: there is, after all, no universal definition of AI.

What is actually being regulated?

There are arguably three theories of who or what the regulation of AI applies to.

  1. The regulation of ‘a computational technology for decision making’.
  2. The regulation of ‘a field of scientific research that studies theories and methods for adaptability, interaction, and autonomy of machines’.
  3. The regulation of ‘an intelligent entity that acts autonomously in (our) environment’.

(Prof Dr Virginia Dignum, Umeå University)

The first and third theories seem more palatable than the second, simply because pure research is more likely to be benign, save for research on utilising AI in warfare or biochemical weapons.

Nonetheless, under the first theory, a more apt approach would be to place greater emphasis on regulating AI decision-making.

In other words, what is or will be regulated seems to revolve around AI decision-making. This is mainly because AI decisions pose two potential risks:

  1. the infringement of fundamental rights;
  2. loss or damage caused as the result of an AI decision.

How should we regulate AI to manage such risks?

Some analytic heuristics can assist us in exploring how to regulate AI.

They are the eight themes identified among AI principles:

  1. Privacy
  2. Accountability
  3. Safety and Security
  4. Transparency and Explainability
  5. Fairness and Non-discrimination
  6. Human Control of Technology
  7. Professional Responsibility
  8. Promotion of Human Values

(Berkman Klein Center for Internet & Society at Harvard University)

Applying these eight themes across the AI regulatory framework could help us address and manage the risks mentioned above.

Different Regulatory Models

Further, we ought to consider what type of regulatory model we should adopt. This article looks into three different models:

  1. Centralised
  2. Decentralised
  3. Other (in-between/hybrid)

Centralised

China is a typical example of a centralised model, in which government policy not only regulates AI but also dictates the trajectory of its development.

There are a few key guidelines and regulations promulgated by the Chinese government in recent years.

They include:

  1. The Data Security Law 2021 and the Personal Information Protection Law 2021
  2. Internet Information Service Algorithm Recommendation Management Regulations
  3. Ethical Norms for New Generation Artificial Intelligence

The Ethical Norms are of particular interest. They encompass areas such as the use and protection of personal information, human control over and responsibility for AI, and the avoidance of AI-related monopolies.

They also put forward six basic ethical requirements, namely:

  1. the advancement of human welfare,
  2. the promotion of fairness and justice,
  3. the protection of privacy and security,
  4. the assurance of controllability and trustworthiness,
  5. the strengthening of accountability, and
  6. improvements to the cultivation of ethics.

One key observation is that these ethical requirements overlap significantly with the eight overarching AI themes mentioned above, which demonstrates the practicality and utility of those themes.

However, rather peculiarly, the document setting out the Ethical Norms does not specify how these norms are to be enforced, nor does it mention any punishment for those who violate them. In practical terms, implementing such punishments seems challenging, especially when the regulator itself may be the goalkeeper, the rule-setter and the referee all at once.

Decentralised

The US epitomises a decentralised regulatory model. There is thus far no federal-level regulation of AI in the US. Instead, the power to govern data has, in effect, been delegated to private entities, including huge IT conglomerates such as Google, Apple, and Meta.

Nevertheless, this raises a real question. Is it genuinely a decentralised model? Or is it merely a switch of actors, with the centralisation of data and AI regulation now lying in the hands of the huge IT conglomerates? This ultimately leads to a crucial question. Under such a “decentralised” regime, is there enough regulatory oversight to ensure the accountability of these private entities? Jake Stein’s work provides a clear and insightful picture, allowing us to examine the so-called ‘decentralised model’ critically.

Other (in-between/hybrid)

The EU is a good example of a hybrid model, given the sheer number of its member states and the differing levels of AI development across them.

There are already some regulations in place, such as the General Data Protection Regulation and regulations on automated driving. That said, none of them is comparable to the EU’s proposed regulation on AI. In April 2021, the Commission laid out a proposal for a Regulation setting out harmonised rules on AI across the member states: the first-ever legal framework on AI, addressing the risks of AI and how the EU could play a leading role in governing it.

Important observations under this draft regulation include the proposals that:

  1. AI providers must ‘notify the relevant national supervisory authority of any serious incidents or malfunctions that lead to a breach of fundamental rights obligations’.
  2. There be a list of prohibited AI practices, namely systems that ‘distort human behaviour’, that result in ‘social scoring’, or those used for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement, subject to certain narrowly defined exemptions.

(European Commission, April 2021, 2021/0106 (COD))

Importantly, this piece of proposed legislation reiterates the importance of safeguarding fundamental rights and obligations as we manoeuvre into uncharted territories of AI amidst its rapid development.

The Need to Regulate AI and the Accompanying Challenges

There is a general consensus on the need to regulate AI, especially when AI plays such a significant role in making decisions that affect human life.

These AI decisions have far-reaching consequences, as the following examples show:

  1. Algorithms that calculated school exam performance in the UK
  2. AI systems that calculate the likelihood of recidivism and inform the length of defendants’ prison sentences in the US
  3. An AI system offering credit limits to consumers: Apple’s credit card system reportedly offered different credit limits to men and women, raising gender-bias concerns
  4. AI systems vetting CVs, which affect job opportunities and social mobility (Pasquale 2020)
  5. AI used in driving tests and other forms of testing (Pasquale 2020)

All of these examples vividly show how AI systems, however well-intentioned their design, can perpetuate biases and thus entrench pre-existing inequities through their decisions. One obvious explanation is ‘garbage in, garbage out’: AI systems are trained on data that is, from their inception, far from unbiased, so they merely reflect, and potentially amplify, those biases.
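
To make the ‘garbage in, garbage out’ mechanism concrete, the following is a minimal, purely illustrative Python sketch. The data, groups, and threshold are all hypothetical inventions for this example: a toy ‘model’ learns approval rates from biased historical records and then reproduces that disparity in every future decision.

```python
from collections import defaultdict

# Hypothetical historical decisions: applicants from group A were
# approved far more often than applicants from group B.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training": estimate the approval rate per group from the records.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1
approval_rate = {g: a / t for g, (a, t) in counts.items()}

# "Inference": the toy model simply echoes the historical rate, so
# the past disparity is carried straight into future decisions.
def decide(group, threshold=0.5):
    return approval_rate[group] >= threshold

print(approval_rate)             # {'A': 0.75, 'B': 0.25}
print(decide("A"), decide("B"))  # True False
```

The point of the sketch is not the arithmetic but the structure: nothing in the pipeline corrects for the skew in the historical records, so the bias flows through unexamined.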

However, is regulation the way, let alone the only way, forward to tackle these biases and inequities? Regulation is certainly not a panacea for all of these underlying institutional inequities. Nonetheless, devising a suitable approach to regulating AI seems an important task, considering that the deployment of AI is growing in both breadth and depth across different sectors. It is better to do something than nothing. Also, as cliché as it may sound, prevention is better than cure.

Challenges in Regulating AI

Nevertheless, regulating AI is a challenging exercise. Three main challenges include:

  1. A context-specific regulatory model is required,
  2. New risks, and
  3. Transparency (ex-ante/ex-post) of the AI (Reed 2018).

First, it is virtually impossible to have a broad, one-size-fits-all regulatory model. This is because the deployment of AI in certain areas grows and develops faster than in others, creating a tension between specificity and vagueness. On the one hand, overly specific laws restrict flexibility and may be outdated before they are implemented or applied. On the other hand, vaguely or broadly defined legislation may be contrary to natural justice: the law should be reasonably predictable so that it does not punish citizens retrospectively.

Second, because of the unpredictability of AI technology, new risks will arise that cannot be predicted, at least not while regulators are drafting the law. Given these unknown new risks (the ‘unknown unknowns’), it is doubtful that regulators can meaningfully set out ‘prospective regulation’ to manage and address the risks of AI.

Third, it is challenging to explain the AI decision-making process beforehand (transparency ex-ante) and afterwards (transparency ex-post). The reason is twofold.

First, ex-ante transparency may not even be possible, because machine learning is an iterative process: the model is constantly evolving. This conflicts with the regulatory aim, which may understandably favour ex-ante transparency, since regulators desire not only to attribute liability but also to avoid or prevent breaches in the first place.
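
To illustrate the tension, here is a minimal Python sketch, again with a hypothetical model and synthetic data: a one-weight model keeps updating itself as live data arrives, so an explanation fixed at deployment time soon describes a model that no longer exists.

```python
# One-feature linear model: approve if weight * x >= 1.0.
weight = 1.0
snapshot = weight  # the state a regulator could inspect ex ante

def decide(x):
    return weight * x >= 1.0

# Stream of (input, observed outcome) pairs arriving after deployment.
stream = [(0.8, True), (0.9, True), (1.2, False), (0.7, True)]

for x, outcome in stream:
    # Crude online update: nudge the weight towards the observed outcome.
    error = (1.0 if outcome else 0.0) - (1.0 if decide(x) else 0.0)
    weight += 0.3 * error * x

print(f"explained ex ante: {snapshot:.2f}; deployed now: {weight:.2f}")
# -> explained ex ante: 1.00; deployed now: 1.09
```

Any ex-ante account a regulator receives is a snapshot; after a handful of online updates, the deployed model has already drifted away from it.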

Second, whether the transparency is ex ante or ex post, the purpose of the revelation and what is actually revealed are crucial. Explaining how the output data was obtained may not be particularly helpful for regulatory purposes. Moreover, what regulators are interested in knowing (e.g. how to solve problems arising from AI, such as a violation of fundamental rights) might not correspond to what is revealed.

Conclusion: Should we regulate AI at all?

Considering the significant and imminent risks posed by AI decisions, which have far-reaching consequences for our fundamental rights and obligations, the better question seems to be how and when we should regulate AI, rather than whether we should regulate it at all. There is a rather pessimistic or sceptical view that regulating AI could stifle or hinder its development. Nonetheless, the regulatory exercise is not merely prohibitive but also permissive, and most of the time an amalgamation of both. The true challenge is how to strike a balance between over-regulating and under-regulating, and between specificity and flexibility. It is hoped that the regulatory models, themes, and guiding principles outlined above can, to a certain extent, provide some signposts for overcoming this challenge.

Bibliography

  1. Dignum V, ‘(How) Should AI Be Regulated?’ (Umeå University) https://ec.europa.eu/jrc/communities/sites/default/files/22_10.30_how_should_ai_be_regulated_virginia_dignum.pdf

  2. Fjeld J, Achten N, Hilligoss H, Nagy A and Srikumar M, ‘Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI’ (Berkman Klein Center for Internet & Society 2020)

  3. Pasquale F, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press 2020)

  4. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (European Commission, April 2021, 2021/0106 (COD))

  5. Reed C, ‘How Should We Regulate Artificial Intelligence?’ (2018) Phil Trans R Soc A 376: 20170360, http://dx.doi.org/10.1098/rsta.2017.0360

  6. Wang P, ‘On Defining Artificial Intelligence’ (2019) 10(2) Journal of Artificial General Intelligence 1