Tess Buckley, MA Philosophy and Artificial Intelligence, NCH at Northeastern University

Philosophy and AI Regulation: Rule of Law and Justice

Writing law that governs and accounts for the actions of AI necessitates critical interrogation of exactly what law is. What are the criteria for legal validity? What is justice? What separates right from wrong? What is the relationship between law and morality? As we work towards potential policy and regulatory frameworks, it is important to examine the nature of law and its relationship to other social and cultural norms.

What is Justice?

There is no universal concept of justice. Even if there were, justice and injustice are not synonymous with right and wrong, and these are not necessarily reflected in the law. Many definitions of justice exist in Western philosophical texts, such as:

  • Polemarchus: to give to each what is owed to him (Plato, 1943, 332d)

  • Thrasymachus: the advantage of the stronger (Plato, 1943, 338c2–3)

  • Glaucon: what the social contract prescribes (Plato, 1943, 358e–359b)

  • Socrates/Plato: the parts each doing their own (Plato, 1943, 442d)

  • Aristotle: equality in several senses (Aristotle, 2009, 1129b)

  • Hume: conformity to one’s society’s rules of property (Hume, 1983)

  • Kant: performing one’s perfect duties (Kant, 2006)

As discussed by Max Van Kleek at the Colloquium, justice encompasses a broader ideal of civic friendship or, more specifically, respectful behavior. He takes respect to be a foundational concept in framing our understanding of the norms, conventions, regulations and law that we might wish to see emerge in an age of AI (Seymour, 2022).

Who is Held Responsible?

If AI has the capacity to be disrespectful, who is held accountable for the insult? It is hard to define who is responsible for the actions of AI: if an autonomous agent acts against the law, who is held accountable? The challenges AI poses to questions of responsibility can be understood through the multiplayer model inherent in the development of emerging technologies. This model describes how multiple participants and stakeholders both overlap and act independently in the process of developing an AI. Traditional approaches to identifying a perpetrator are no longer applicable in the entangled, multi-layered development of AI systems (and the insults potentially actioned by them). There are at least ten entities among the many possible stakeholders who are partially, indirectly or temporarily involved in the invention process. These include, but are not limited to: software programmers; data suppliers; trainers/feedback suppliers; owners of the AI systems; operators of the systems; new employers of other players; the public; the government; the investor; and finally, the AI system itself (Yanisky-Ravid & Velez-Hernandez, 2018). If any of these ten players can claim ownership over the invention, then the problem of identifying the actual inventor, the entity responsible for the actions of the autonomous agent, must be addressed. Many of the players may have a contractual obligation to assign the invention to the company, but who is truly responsible?

Do Algorithms Owe us Justice?

Some claim that justice should be extended to non-human entities (Nussbaum, 2004), while others draw the line of justice between humans and non-humans (Rawls, 1971). At the Colloquium, Zheng Hong See (2022) questioned, ‘Should we regulate AI at all? And what of accountability in such systems?’ Under these definitions and principles of justice, AI systems lack the moral powers and the capacity for action necessary to be held responsible for insult: they cannot determine what is justly owed to them and what they owe to others, abide by a social contract (Plato, 1943), or perform perfect duties (Kant, 2006) in the way humans can. Nussbaum’s (2004) suggestion interprets justice as involving a kind of reciprocity. Do algorithms owe us justice? And if these principles can be programmed, does the blame then shift to the programmer?

Hobbes: The Social Contract

Commonwealth is instituted when all parties agree in the following manner: ‘I authorize and give up my right of governing myself, to this man, or to this assembly of men, on this condition, that thou give up thy right to him, and authorize all his actions in like manner’ (Hobbes, 1651). We are motivated by desires, which are unlimited; resources, however, are limited. This reality evokes fear of insecurity and potential anarchy. To escape these fears we agree to limit our freedoms by vesting authority in a sovereign entity, and thus enter what Hobbes calls the ‘social contract’. AI does not hold these same fears, so what binds it to the social contract? AI is not a citizen and does not have personhood, so who is accountable for its actions?

The Algorithmic Social Contract, mentioned briefly by Luke Thorburn at the Colloquium, is a conceptual framework for the regulation of AI and algorithmic systems (Rahwan, 2018). Rahwan proposes the need for tools to program, debug and maintain an ‘algorithmic social contract’: a pact between various human stakeholders, mediated by machines. He refers to a concept called society-in-the-loop (SITL), which combines the human-in-the-loop (HITL) control paradigm with the Hobbesian social contract. In our attempt to regulate AI we must first consider existing theories and laws, and adapt them for emerging technologies, rather than begin from ground zero.
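To make the HITL half of that pairing concrete, here is a minimal sketch of a human-in-the-loop escalation gate, written for illustration only: every name in it (Decision, risk_score, the 0.7 threshold, human_review) is a hypothetical construction for this post, not an API from Rahwan’s paper. SITL would widen this loop from a single operator to the full set of societal stakeholders.

```python
# Illustrative human-in-the-loop (HITL) gate: a hypothetical sketch,
# not Rahwan's implementation. Low-risk decisions run automatically;
# high-risk ones are escalated so that a person remains accountable.
from dataclasses import dataclass

ESCALATION_THRESHOLD = 0.7  # invented cut-off for requiring human sign-off

@dataclass
class Decision:
    action: str
    risk_score: float  # system's own estimate of potential harm, in [0, 1]

def human_review(decision: Decision) -> bool:
    """Stand-in for human oversight; a real system would route the
    case to a qualified, accountable reviewer."""
    answer = input(f"Approve '{decision.action}' (risk {decision.risk_score:.2f})? [y/n] ")
    return answer.strip().lower() == "y"

def hitl_gate(decision: Decision) -> None:
    # Decisions below the threshold proceed automatically; the rest
    # require explicit human approval before execution.
    if decision.risk_score < ESCALATION_THRESHOLD:
        print(f"Auto-executed: {decision.action}")
    elif human_review(decision):
        print(f"Executed with human approval: {decision.action}")
    else:
        print(f"Blocked by human reviewer: {decision.action}")

if __name__ == "__main__":
    hitl_gate(Decision("send a routine reminder", 0.10))  # proceeds automatically
    hitl_gate(Decision("deny a loan application", 0.85))  # escalated to a human
```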

Impacts of AI on our Relationship to Law

AI is a threat to democracy and the legitimacy of law, and it is known to amplify biases at scale (Whittaker, 2018). The impact of AI on democracy is evidenced through misinformation, censorship, polarization and microtargeting (Manheim & Kaplan, 2019). Jake Stein posed a topical question at the Colloquium: can we disintermediate existing platform power in the face of the surveillance capitalism arms race? The reigning economic system, centered on the capture and commodification of personal data for profit, is a clear threat to democracy.

The legitimacy of judicial institutions is founded to a large extent on their moral authority. Courts command moral authority because they are seen to respect the individual; machines, lacking first-person subjectivity, know nothing of such things (Zuckerman, 2020). As a result, AI decision-making may lead to an ever-widening gulf between machine law and human conceptions of justice and morality, to the point where legal institutions cease to command loyalty and legitimacy (Rodrigues, 2020).

AI can be biased due to the teams developing it or the data sets used in its training. AI will not fix the problems we fail to solve ourselves; it will instead perpetuate and amplify them at scale. Alexandra Houston discussed the ethics of AI at the Colloquium, presenting on current challenges around emerging biotechnology. Technologies can restructure our physical and social worlds and influence how we live our daily lives. Recognition of this reality is needed to inform our adoption of new technologies and to direct our judgements and choices regarding them (Winner, 2014). I fear that our responses to the ethical and social implications of AI are all too often reactive, prompting action only after unforeseen side effects and secondary consequences emerge. Let us work to be proactive in guarding against the risks of AI in the pre-production and production stages of its development, rather than treating them as an afterthought to be actioned in post-production once the damage is done.

How do we Regulate AI?

The Colloquium was titled the Goodenough-Oxford Joint Colloquium on AI and Regulation, and yet those assembled discussed other ‘routes to recovery’, which included, but were not limited to, the following:

  • Decentralization

  • Technological remedies (e.g. mobile app extensions)

  • Access to data for research

  • Literacy training

  • Bottom-up approaches (increased autonomy and an emphasis on user empowerment)

  • Open access to data (a viable solution which allows researchers to gain knowledge of how big tech functions, with openness as an antidote to closed monopolies)

As countries compete to attract the AI industry and accelerate AI development, we must also work to increase regulatory oversight. It is crucial that we mitigate the societal risks associated with current technological developments. Konrad Kollnig asked: how are we to regulate competition and avoid monopolies in an AI-driven platform age? I propose some broad solutions:

  • Algorithmic auditing (a minimal sketch follows this list)

  • Certified ‘respectful’ systems

  • Programmed intuitions

  • Explainability through research and dissemination of knowledge

  • National and global responses

  • Policy and ethical frameworks
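Of these, algorithmic auditing is the easiest to make concrete. The sketch below shows one audit check, demographic parity, run over a log of decisions. The data, group labels and 0.2 flag threshold are all invented for illustration; a real audit would report many metrics across many subgroups, alongside procedural safeguards.

```python
# Minimal sketch of one algorithmic-auditing check: demographic parity.
# The data and threshold are invented for illustration only.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, whether the system granted the outcome)."""
    totals: dict[str, int] = defaultdict(int)
    granted: dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        granted[group] += outcome  # True counts as 1, False as 0
    return {g: granted[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    # Difference between the best- and worst-treated groups' approval rates.
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Hypothetical loan decisions: (applicant group, approved?)
    log = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
    gap = demographic_parity_gap(log)
    print(f"Selection-rate gap between groups: {gap:.2f}")
    # 0.2 is a common (and contested) rule-of-thumb threshold, not a legal standard.
    print("Audit flag raised" if gap > 0.2 else "Within threshold")
```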

In conclusion, the lack of AI governance has created a gray area in which the chain novel of law, and our relationship to it, is being skewed. The gap between AI and humans may widen to the point where people become alienated from legal institutions, leading to a loss of legitimacy and damage to the rule of law and the social contract. In order to benefit from what AI has to offer without undermining the foundations of our legal and social institutions, we must moderate the risks it poses.

Bibliography

Aristotle. (2009). The Nicomachean Ethics (W. D. Ross, Trans., L. Brown, Ed.). Oxford: Oxford University Press.

Hobbes, T. (1651). Leviathan, or, The matter, forme, and power of a common-wealth ecclesiasticall and civill. London: Andrew Crooke.

Hume, D. (1983). An Enquiry Concerning the Principles of Morals, Chapter 3. Indianapolis: Hackett.

Kant, I. (2006). Practical Philosophy (M. J. Gregor, Ed. & Trans.). New York: Cambridge University Press.

Manheim, K., & Kaplan, L. (2019). Artificial Intelligence: Risks to Privacy and Democracy. Yale Journal of Law & Technology, 21. https://yjolt.org/artificial-intelligence-risks-privacy-and-democracy

Nussbaum, M. C. (2004). Beyond ‘compassion and humanity’: Justice for nonhuman animals. In C. R. Sunstein & M. C. Nussbaum (Eds.), Animal Rights: Current Debates and New Directions (pp. 299–320). Oxford: Oxford University Press.

Plato. (1943). The Republic. New York: Books, Inc.

Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20, 5–14. https://doi.org/10.1007/s10676-017-9430-8

Rawls, J. (1971). A Theory of Justice. Cambridge, MA: The Belknap Press of Harvard University Press.

Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005. https://doi.org/10.1016/j.jrt.2020.100005

Seymour, W., Van Kleek, M., Binns, R., & Murray-Rust, D. (2022). Respect as a Lens for the Design of AI Systems. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’22), Oxford, United Kingdom. https://doi.org/10.1145/3514094.3534186

Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., ... & Schwartz, O. (2018). AI Now Report 2018 (pp. 1–62). New York: AI Now Institute at New York University.

Winner, L. (2014). Technologies as Forms of Life. https://doi.org/10.1057/9781137349088_4

Zuckerman, A. (2020). Artificial Intelligence – Implications for the Legal Profession, Adversarial Process and the Rule of Law. UK Constitutional Law Association. https://ukconstitutionallaw.org/2020/03/10/artificial-intelligence-implications-for-the-legal-profession-adversarial-process-and-the-rule-of-law/