(How and When) Should AI be Regulated?
Before answering the question of whether AI should be regulated at all, we need to dissect two issues. First, what is AI in the first place? Second, what is actually being regulated?
What is AI?
There is no universal definition of AI.
According to Pei Wang, a working definition of AI is ‘automated systems to perform tasks normally requiring human intelligence’ (Wang 2019).
Nonetheless, this definition is subject to critical examination. First, the goalposts are constantly shifting: the set of tasks that normally require human intelligence changes over time. Thus, the threshold for satisfying this definition will become increasingly higher (or lower!).
Second, the word ‘automated’ is ambiguous. For example, if a machine runs entirely on its own, except that a human must press the final button for it to work, does this count as ‘automated’?
Ultimately, there is no universal definition of AI.
What is actually being regulated?
There are arguably three possible theories of what AI regulation applies to:
- The regulation of ‘a computational technology for decision making’.
- The regulation of ‘a field of scientific research that studies theories and methods for adaptability, interaction, and autonomy of machines’.
- The regulation of ‘an intelligent entity that acts autonomously in (our) environment’.
(Prof Dr Virginia Dignum, Umeå University)
The first and third theories seem more palatable than the second, simply because pure research is more likely to be benign, save for research on utilising AI in warfare or biochemical weapons.
Nonetheless, under the first theory, a more apt approach would be to place greater emphasis on regulating AI decision-making. In other words, what is being, or going to be, regulated seems to revolve around AI decision-making, mainly because AI decisions pose two potential risks:
- the infringement of fundamental rights;
- the causing of loss or damage as the result of an AI decision.
How should we regulate AI to manage such risks?
Some analytic heuristics can assist us in exploring how to regulate AI. They are the eight themes among AI Principles:
- Privacy
- Accountability
- Safety and Security
- Transparency and Explainability
- Fairness and Non-discrimination
- Human Control of Technology
- Professional Responsibility
- Promotion of Human Values
(Berkman Klein Center for Internet & Society at Harvard University)
Applying these eight themes across an AI regulatory framework could help us address and manage the risks mentioned above.
Different Regulatory Models
Further, we ought to consider what type of regulatory model to adopt. This article looks into three different models:
- Centralised (e.g. China)
- Decentralised (e.g. the US)
- In-between/hybrid (e.g. the EU)
China is a typical example of a centralised model, in which government policy not only regulates AI but also dictates the trajectory of its development.
There are a few key guidelines and regulations promulgated by the Chinese government in recent years.
- Data Security Law and the Personal Information Protection Law 2021
- Internet Information Service Algorithm Recommendation Management Regulations
- Ethical Norms for New Generation Artificial Intelligence
The Ethical Norms are of particular interest. They encompass areas such as the use and protection of personal information, human control over and responsibility for AI, and the avoidance of AI-related monopolies.
They also put forward six basic ethical requirements, namely:
- the advancement of human welfare,
- the promotion of fairness and justice,
- the protection of privacy and security,
- the assurance of controllability and trustworthiness,
- the strengthening of accountability, and
- improvements to the cultivation of ethics.
One key observation is that these ethical requirements overlap significantly with the eight overarching AI themes mentioned above. This demonstrates the practicality and utility of the eight themes.
However, rather peculiarly, the document setting out the Ethical Norms does not specify how the norms are to be enforced; nor does it mention any punishment for those who violate them. In practical terms, implementing such punishments seems challenging, especially when the regulator itself may be the goalkeeper, the rule-setter and the referee all at once.
The US epitomises a decentralised regulatory model: there is thus far no federal-level regulation of AI in the US. Under this model, the power to govern data has largely been delegated to private entities, including huge IT conglomerates such as Google, Apple, and Meta.
Nevertheless, this raises a real question. Is it genuinely a decentralised model, or merely a switch of actors, with the centralisation of data and AI regulation now lying in the hands of huge IT conglomerates? This leads to a crucial follow-up: under such a “decentralised” regime, is there enough regulatory oversight to ensure the accountability of these private entities? Jake Stein’s work provides a clear and insightful picture, allowing us to critically examine the so-called ‘decentralised model’.
The EU is a good example of the in-between/hybrid model, given the sheer number of member states as well as the differing levels of AI technology across them.
Some regulations are already in place, such as the General Data Protection Regulation and regulations on automated driving. That said, none of them is comparable to the EU’s proposed regulation on AI. In April 2021, the Commission laid out a proposal for a Regulation setting out harmonised rules on AI across member states. This is the first-ever legal framework on AI, addressing the risks of AI and how the EU could play a leading role in managing them.
Important observations under this draft regulation include the proposals that:
- AI providers must ‘notify the relevant national supervisory authority of any serious incidents or malfunctions that lead to a breach of fundamental rights obligations’.
- A list of prohibited AI, namely systems that ‘distort human behaviour’, that result in ‘social scoring’, or that are used for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement, ‘subject to certain narrowly defined exemptions’.
(European Commission, April 2021/0106 (COD))
Importantly, this piece of proposed legislation reiterates the importance of safeguarding fundamental rights and obligations as we manoeuvre into uncharted territories of AI amidst its rapid development.
The Need to Regulate AI and the Accompanying Challenges
There is a general consensus that AI needs to be regulated, especially when AI plays such a significant role in making decisions that affect human life. These AI decisions can have far-reaching consequences, as the following examples show:
- Algorithms that calculate school performance in the UK
- AI systems that calculate the likelihood of recidivism and determine length of prison sentences of defendants in the US
- AI systems offering credit limits to consumers: Apple’s credit card system offered different credit limits to men and women, raising concerns of gender bias
- AI systems vetting CVs, which affect job opportunities and social mobility (Pasquale 2020)
- Driving tests, and other types of tests (Pasquale 2020)
All of these examples vividly show how AI systems, however well-intentioned their design, can perpetuate biases and thus entrench pre-existing inequities through their decisions. One obvious explanation is “garbage in, garbage out”: the data these systems learn from is, from the outset, far from unbiased, so the systems merely reflect, or potentially amplify, those biases.
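The “garbage in, garbage out” mechanism can be made concrete with a minimal sketch. The loan data and the naive “model” below are entirely hypothetical, chosen only to illustrate the point: a system that learns from historically biased decisions simply reproduces that bias as its own rule.

```python
# Minimal sketch of "garbage in, garbage out" (hypothetical data):
# a naive model trained on biased historical decisions ends up
# encoding the bias as its decision rule.
from collections import defaultdict

# Hypothetical historical loan decisions, skewed against group "B".
history = [
    ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "approve"),
]

def train(records):
    """'Learn' the most frequent past decision for each group."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, decision in records:
        counts[group][decision] += 1
    # The rule for each group is simply its majority historical outcome.
    return {g: max(d, key=d.get) for g, d in counts.items()}

model = train(history)
print(model["A"])  # approve
print(model["B"])  # deny — the historical bias becomes the rule
```

Real AI systems are of course far more sophisticated than this majority rule, but the failure mode is the same: nothing in the training step questions whether the historical decisions were fair in the first place.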
However, is regulation “the”, and “the only”, way forward to tackle these biases and inequities? Regulation is certainly not a panacea for these underlying institutional inequities. Nonetheless, devising a suitable approach to regulating AI seems an important task, considering that the deployment of AI is growing in both breadth and depth across different sectors. It is better to do something than nothing; and, as cliché as it may sound, prevention is better than cure.
Challenges in Regulating AI
Nevertheless, regulating AI is a challenging exercise. Three main challenges include:
- A context-specific regulatory model is required,
- New risks, and
- Transparency (ex-ante/ex-post) of the AI (Reed 2018).
First, it is virtually impossible to have a broad, one-size-fits-all regulatory model, because AI deployment in some areas grows and develops faster than in others. This creates a tension between specificity and vagueness. On the one hand, overly specific laws may lack flexibility and become outdated before they are implemented or applied. On the other hand, vaguely or broadly defined legislation may be contrary to natural justice, as the law should be reasonably predictable so that it does not punish citizens retrospectively.
Second, because of the unpredictability of AI technology, there are new risks that cannot be foreseen, at least not when regulators are drafting the law. With these unknown unknowns, it is doubtful that regulators can meaningfully set out “prospective regulation” to manage and address the risks of AI.
Third, it is challenging to explain the AI decision-making process beforehand (transparency ex-ante) and afterwards (transparency ex-post). The reason is twofold.
First, ex-ante transparency may not even be possible, because machine learning is an iterative process: the system is constantly evolving. This conflicts with the regulatory aim, which understandably favours ex-ante transparency, since regulators desire not only to attribute liability but also to avoid or prevent breaches in the first place.
Second, regardless of whether the transparency is ex-ante or ex-post, the purpose of the revelation and what is actually revealed are crucial. Explaining how the output data is obtained may not be entirely helpful for regulatory purposes, and what regulators are interested in knowing (e.g. how to solve problems arising from AI, such as violations of fundamental rights) might not correspond to what is revealed.
Conclusion: Should we regulate AI at all?
Considering the significant and imminent risks posed by AI decisions, which have far-reaching consequences for our fundamental rights and obligations, the better question is how and when we should regulate AI, rather than whether we should regulate it at all. There is a rather pessimistic, or sceptical, view that regulating AI could stifle or hinder its development. Nonetheless, regulation is not merely prohibitive but also permissive, and most of the time an amalgamation of both. The true challenge is to strike a balance between over-regulating and under-regulating, and between specificity and flexibility. It is hoped that the regulatory models, themes and guiding principles outlined above could, to a certain extent, provide some signposts for overcoming this challenge.
Dignum V, ‘(How) Should AI Be Regulated?’ (Umeå University) https://ec.europa.eu/jrc/communities/sites/default/files/22_10.30_how_should_ai_be_regulated_virginia_dignum.pdf
Fjeld J, Achten N, Hilligoss H, Nagy A and Srikumar M, ‘Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI’ (Berkman Klein Center for Internet & Society, 2020)
Pasquale F, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020)
Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (European Commission, April 2021, 2021/0106 (COD))
Reed C., ‘How should we regulate artificial intelligence?’ (2018) Phil. Trans. R. Soc. A 376: 20170360. http://dx.doi.org/10.1098/rsta.2017.0360
Wang P, ‘On Defining Artificial Intelligence’ (2019) 10(2) Journal of Artificial General Intelligence 1