David Neuberger, The Rt Hon the Lord Neuberger of Abbotsbury HonFRS

Introductory talk

When the internet was launched, it was seen as a force for good, enabling everyone across the world to be informed and to communicate. Its potential risks were largely ignored, and it was only when its downsides, such as trolling, fake news and the dark web, came to light that the problems were appreciated. And that is a major reason why the law, which should be curbing such activities, is woefully lagging behind. And I would suggest that the internet also shows that you can’t leave regulation to the markets.

Not least because it is far more multi-faceted, AI is taking far longer than the internet to reach its full potential. Artificial general intelligence (AGI), if it is ever achievable, is, I understand, maybe many decades away. However, even without AGI, AI has a considerably greater potential for change than the internet. So, if there is to be regulation, we should certainly be preparing for it. And, while we are doing so, we face a daunting task. That is for a number of reasons:

  1. The law moves slowly, and even if AI can be said to be developing more slowly than the internet did, it still develops very fast.
  2. While the situation is getting much better, lawmakers, lawyers and regulators are embarrassingly ignorant about the nature and workings of IT – and those at the cutting edge of AI are not generally interested in the law;
  3. AI throws up new and challenging legal and regulatory issues, and while we should be preparing, we should not be rushing our fences – and we should be ready to adapt;
  4. The more regulation and transparency that is required, the greater the threat to a machine’s effectiveness;
  5. Seeking regulation and transparency in the decision-making process has obvious difficulties with “black box” AI – or intermediate layers;
  6. The same problem arises because AI owners are unwilling, mainly for commercial reasons, to disclose the program code they have used;
  7. Laws are useless unless they are enforced, and enforcement requires will, money and expertise, which governments are generally reluctant to pay for or commit to;
  8. IT companies pay very little, if any, tax in the countries where they make their profits, shutting off the obvious source of funds for regulation and enforcement;
  9. Regulation can advantage established big players, as it makes it hard for new or small companies to enter or stay in the market;
  10. There is often no single objectively “right” answer to ethical questions;
  11. AI issues, like Internet issues, are global – indeed they extend into space – and international agreement is notoriously hard to achieve.

AI will both create new asymmetries and reinforce pre-existing asymmetries in the relationship between service provider and consumer, with implications in many fields, including privacy, data protection, human autonomy, freedom of movement/thought/conscience/speech, and freedom from discrimination. It also risks invading human autonomy and undermining dignity – e.g. by nudging, unethical advertising, fake news and job selection. And the same is true of surveillance technology such as face recognition, biometric identity systems, and emotion recognition technology. As a result, AI can also undermine trust and confidence.

There are other problems. Coding algorithms can lead to a freezing of relationships which in practice would and should naturally change, often quite quickly and substantially. Also, the data collection, selection of proxies and algorithm coding involve humans and will therefore reflect their unspoken biases, and a small human bias can lead to a really out-of-kilter outcome. Indeed, such bias has been described as “a touchpoint of vulnerability in the AI ecosystem”. And, when it comes to ML, we might find machines adopting approaches which by our ethical standards would be regarded as equivalent to unacceptable biases.

There are more subtle problems too. If we reach a stage where almost everything we do is overseen by AI and effectively regulated through AI, we will have reached the ultimate world of box-ticking (I suppose it would be black box-ticking). Our sense of right and wrong, even our sense of identity and dignity, will not merely be challenged, but will effectively be at risk of withering away. And, uncomfortable though it might be for an ex-judge to admit it, it is essential that human life includes the opportunity to break the rules. After all, the whole basis of evolution is that things go wrong. And of course, AI also gives rise to monopoly and IP problems.

These problems will have to be dealt with, indeed are already being dealt with, by legislators, regulators, and judges. Jamie Susskind has said, in his recent book The Digital Republic, that the principal aims of regulation of AI are (i) to make technology answerable, (ii) to disperse it, (iii) to restrain it, (iv) to ensure that it reflects the moral and ethical values of society, and (v) to ensure that it complies with the law.

Many aspects of these aims can be addressed by answering the overriding problem of how we ensure that AI produces ethical and legal outcomes in an ethical and legal way. As the 2019 report of the Institute of Electrical and Electronics Engineers (IEEE), Ethically Aligned Design, says, “If machines engage in human communities as quasi-autonomous agents, then they must be expected to follow the community’s social and moral norms” and “different types of technical embodiments will demand different sets of norms”; accordingly, those designing, developing, using, maintaining, distributing and decommissioning AI should ask “key ethical questions” as part of risk impact assessment.

The purpose of regulation in this context is to minimise the risk of decisions which are unlawful or unethical in terms of outcome or procedure. However, when it comes to questions of procedure in relation to AI decisions – i.e. how the decision was arrived at – a problem which will often arise is that the machine’s algorithms will involve “unaccountable power”, because its “black box” technology means that we cannot know how it arrives at its conclusions – or it requires a disproportionate amount of work to find out.

The concern over the “black box” problem is a little ironic. We know what goes into an ML robot, and we can, at least in theory, see how even that robot reaches its decisions, by looking at the electronics. But we cannot now, even in theory, examine the human brain to see how it reaches a result. Of course, decisions which affect people have to be justified by reasons, but can we be confident that the stated reasons are the true reasons – either subjectively or objectively? Even when the decider is a judge. 250 years ago, Lord Mansfield, one of the greatest English Chief Justices, was asked for advice by an army officer who had been appointed governor of an island in the West Indies, where he would have to administer justice. Lord Mansfield said: “Decide promptly, but never give any reasons. Your decisions may be right, but your reasons are sure to be wrong”.

So, it might be said: why try to regulate black box reasoning if we don’t regulate human reasoning? Well, we can interrogate and test human reasoning up to a point, and we do. It’s why judges and public officials have to give reasons, despite what Lord Mansfield said. And, unlike humans, robots’ decision-making can be rigorously and reliably tested without violating their dignity, and we can get accurate responses when we do so. And people trust and are used to human decisions; that is not true of machine decisions, by which many people would be very uncomfortable to be conclusively categorised, assessed or judged.

But, to digress slightly, not only will people get used to machine-made decisions, but in due course AI will presumably develop so as to be able to mimic accurately human morals and emotions. At that point, it could be said that machines will be equivalent to people. The fact that their emotions are developed by electronics rather than by neurons would not render their emotions any less genuine, just as the fact that I express my emotions in English does not make them less genuine than if I expressed them in Italian.

Another point concerns the convenient myth of neutrality – the view that an algorithm is just if it treats everybody in the same way. It sounds fair, and it is an attractive option for big tech as it is very easy to apply. However, as anyone concerned with discrimination knows, justice very often requires that different people should be treated differently – children being an obvious example. I think lawyers should appreciate that neutrality is not acceptable, but, as I have said, most lawyers are a bit intimidated by AI.

Given the ignorance of lawyers and other non-IT specialists, Lord Sales, a UK Supreme Court Judge, rightly said that “Effective solutions to shared problems depend more and more on technical expertise, so that there has been a movement to … rule by technocrats using expertise which is not available or comprehensible to the public at large. … It has the effect that the traditional, familiar ways of aligning power with human interests through democratic control by citizens, regulation by government and competition in markets, are not functioning as they used to”. We have to train legislators, regulators and lawyers so that they can play a proper part in the legal, ethical and regulatory management of AI.

Because of their ignorance, many lawmakers, regulators and lawyers have a rather vague and overblown idea of what AI and ML are. The words “intelligence” and “learning” are misleading, as they give the impression that such machines think like humans. But such machines are simply based on pattern-matching technology, and they serve to decrease cost, increase speed and improve the accuracy of prediction. AI is able to perform tasks at great speed and in relation to huge amounts of data, well beyond what is practicable or even possible for human beings. This gives rise to a form of power which raises new challenges for the law, in its traditional roles of defining and regulating rights and of finding controls for illegitimate or inappropriate exercise of power.

I agree with the traditional view that, at least in general, it is necessary to ensure that existing legal rules and ethical values are applied or adapted so as to apply effectively to AI. However, this will not be a one-way process: I think it inevitable that AI will change some of our legal and ethical principles – i.e. there will be what I would call feedback factors. When IT first started to impinge on our working processes, many of us made the mistake of thinking that we needed to adapt IT systems to comply with our working processes, without adapting our working processes to comply with IT systems. The same is true of AI and our ethical and legal principles, although we must of course be very careful in that connection.

I think the need for new laws to deal with AI is exaggerated, given that we will generally expect AI to conform with existing laws. In many cases, no new laws will be needed – especially in a common law system such as we have in England, where the Judges have some power to extend and adapt the law. However, there is no doubt that some new law will be needed, and probably significantly more regulation. Further, feedback factors may result in AI causing us to change some laws and regulations.

There are various obvious areas of law which will come into play. Thus, we will need to decide who should be liable for harm caused by AI – for instance, an accident involving a driverless car which may have been due to faulty AI, or a payment of far too much cryptocurrency because of a glitch in an algorithm (an actual case in Singapore). And in other cases we may need to decide whether anyone should be liable – e.g. where AI is forced to make a difficult choice, as in the case of a driverless car having to choose between hitting a child or a pensioner. This gives rise to two questions (no doubt, among many others): how should liability be assessed, and who should be liable?

The common law recognises a duty in tort – i.e. a duty to take care – and enables someone to recover damages if they suffer as a result of another not taking care (e.g. careless driving, negligent professional advice, poor design). However, that would not cover damage which results from an error which was not careless. It may be that when the damage is caused by AI the test should be stricter, and that, for instance, the law of breach of trust or of strict product liability should apply. In 2017 the European Parliament approved another solution – a compulsory insurance scheme, as with cars, which producers and owners of robots would be obliged to take out to cover damage potentially caused by the robots.

A possible feedback factor is encapsulated in the question whether the fact that AI can accomplish a task with enormous speed and minimal risk should affect our approach to assessing whether a human’s attempt to carry out the same task was carried out ineptly. It seems likely to me that the answer is yes, at least to the extent that the human would be vulnerable to the argument that, if he chose to do the task when he could have farmed it out to a machine, he owed a duty to do it at least as well and as reliably as the machine would have done.

Turning to the question of who should be liable, should it be the AI’s owner, if it has one, or the faulty data input collectors, the dodgy hardware makers, or the imperfect algorithm designers? In 2019, the European Commission accepted a recommendation from an independent expert group that liability rests with “the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation”, and that the relevant duties include designing or choosing the right system, monitoring it, and maintaining it. However, if we applied normal English law principles, the answer would turn on the facts of the particular case – a familiar response to a legal problem.

By contrast to the 2019 recommendation, the European Parliament’s 2017 proposal also suggested creating a specific legal status for robots, so that at any rate the more sophisticated autonomous robots could have the status of electronic persons, especially where a robot makes autonomous decisions. I am rather queasy about giving AI legal personality, and I doubt whether it would be ethically appropriate or financially practical, but it is fair to say that legal personality has long been bestowed on companies.

Apart from liability for damage, there is the at least equally important question to which I have already referred, namely that of ensuring that machines are ethical and law-abiding in their decision-making, both in terms of process and in terms of outcome. This involves invoking three principles, namely fairness, accountability and transparency – or explainability, as it is often called.

The approach to legislation and regulation should involve these three primary factors. Fairness requires that AI must, if possible, be held at least to the same standards as human decision-making. Explainability is the ability to communicate to any end-user, in plain language, the elements relevant to an AI system, including the data, the algorithms, the business models and the outcomes; it includes traceability, which means keeping records. Accountability requires both process and outcome transparency. Nonetheless, when it comes to AI, full transparency in every case is neither possible (because of the black box problem) nor desirable (as sometimes it is right for the public not to know). However, whatever the position in relation to individual cases, AI companies should produce transparency reports.
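By way of illustration of what traceability might involve in practice, here is a minimal sketch of a per-decision record kept in a simple append-only log. The field names, example values and logging format are my own assumptions for the purpose of illustration, not drawn from any particular standard or regulatory requirement.

```python
# Illustrative sketch only: a minimal record kept for each automated decision,
# supporting traceability. Field names and values are assumptions for the
# purpose of example, not taken from any standard or regulation.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class DecisionRecord:
    """One auditable record per automated decision."""
    model_version: str    # which model/algorithm produced the decision
    input_summary: dict   # the inputs (or a privacy-preserving summary of them)
    output: str           # the decision itself
    explanation: str      # plain-language reasons, so far as they can be given
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to a simple append-only log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example usage with hypothetical values.
log_decision(DecisionRecord(
    model_version="credit-scoring-v2.3",
    input_summary={"age_band": "30-39", "income_band": "B"},
    output="declined",
    explanation="Income below threshold for requested amount.",
))
```

Even so modest a record would allow an ex post reviewer to ask what went into a decision, what came out, and what explanation was offered at the time.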

It seems obvious that a victim of a wrong or arguably wrong decision is entitled to an explanation as to why the decision was arrived at, but that is also true of weird, or at least apparently weird, decisions (which, I suspect, we will increasingly get with AI and ML). Explanations are also needed to make sure that rights are not being infringed by the process. Article 22 of the EU’s GDPR 2016 provides that a “data subject” has the right “not to be subject to a decision based solely on automated processing, including profiling, which produces a legal effect concerning him or her or similarly affects him or her”, which I think would be too high a perfectionist hurdle if it were intended to apply in every case. And 2020/2021 EU papers suggest a more realistic approach – dividing AI into “high risk” and “non-high risk” categories, and implying that lower standards would apply to the latter category. And the Singapore approach accepts black box decisions where unavoidable and concentrates in such cases on ex post testing – i.e. judging by the decision itself, rather than also by reference to the decision-making process.

To an English public law practitioner this is interesting, as decisions of public bodies are traditionally judged more by reference to the lawfulness and reasonableness of the procedure and reasons than by reference to the outcome. However, over the second half of the last century, the courts gradually became more ready to assess public bodies’ decisions themselves (by expanding the concept of rationality). And with the introduction of human rights into English law in 2000, courts have been markedly more ready than they were to assess the defensibility of the outcome of public bodies’ decisions. And now it appears that the advent of AI may give the courts another push in that direction.

Regulation needs, I think, to be at least as much ad hoc as general. The devil is inevitably in the detail of the specific AI involved – its specific purpose, the data selection, the algorithm designer and so on. In addition, even ethical standards are contextual. Taking account of someone’s ethnicity might be acceptable for anaemia testing, but it would be anathema for job selection.

Because digital processes are more fixed in their operation than the human algorithms of law, and because they operate with immense speed, we need to focus on ways of scrutinising and questioning the content of digital systems at the ex ante design stage. We need to get the data collector and the algorithm designer to sit down with a lawyer or an ethicist. In the end, the aim is one of fairness. For example, the input needs to be checked for biases or discrimination based on characteristics such as gender, sexuality, age and ability – what the law refers to as protected characteristics. And, especially in the case of high-stakes or high-risk algorithms, there may be something of a need for certification – either by the state that the AI is fit for its purpose, or by the owner that the AI complies with relevant rules – before it is deployed. And maybe there should be exams for data collectors, algorithm designers, machine operators and supervisors, to ensure that they are properly aware of law and ethics before they can work in AI – i.e. a cadre of AI professionals.
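To make the idea of checking input data concrete, here is a minimal sketch of the kind of ex ante check that might be run on a training or decision dataset, comparing favourable-outcome rates across a protected characteristic. The data, column names and the 80% threshold (a crude analogue of the “four-fifths” rule of thumb sometimes used in discrimination analysis) are illustrative assumptions only; real bias auditing is considerably more sophisticated and, as I have said, highly context-dependent.

```python
# Illustrative sketch only: a simple ex ante check of a dataset for disparities
# across a protected characteristic. Column names, data and threshold are
# assumptions for the purpose of example.
from collections import defaultdict


def selection_rates(records, group_key, outcome_key):
    """Proportion of favourable outcomes per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favourable[r[group_key]] += 1 if r[outcome_key] else 0
    return {g: favourable[g] / totals[g] for g in totals}


def flag_disparity(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` times the highest rate."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}


# Hypothetical shortlisting data.
data = [
    {"gender": "F", "shortlisted": True},
    {"gender": "F", "shortlisted": False},
    {"gender": "F", "shortlisted": False},
    {"gender": "M", "shortlisted": True},
    {"gender": "M", "shortlisted": True},
    {"gender": "M", "shortlisted": False},
]

rates = selection_rates(data, "gender", "shortlisted")
print(rates)                  # F is roughly 0.33, M roughly 0.67
print(flag_disparity(rates))  # {'F': True, 'M': False} -> F flagged for review
```

The point of such a check is not that it settles the ethical question, but that it forces any disparity into the open at the design stage, where the lawyer or ethicist can ask whether it is justifiable in context.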

We also need to find effective mechanisms to allow for systematic ex post review of how digital systems are working and, so far as is possible without destroying the efficiency gains which they offer, to allow for ex post challenges to the individual concrete decisions which they produce, so as to permit the correction of legal errors and the injection of equity and mercy. If an ex post analysis shows that the results cannot be explained in human terms – i.e. in terms of causality or common sense – then it seems to me that the issue would be whether the results can nonetheless stand as being ethically acceptable, which may depend on the facts and the context.
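As a sketch of what systematic ex post review might look like in practice, the following takes a random sample of decided cases, has them re-examined independently (here by a notional human reviewer), measures the rate of disagreement and escalates the system for fuller investigation if that rate exceeds a tolerance. The function names, sample size and tolerance are my own illustrative assumptions, not a description of any existing scheme.

```python
# Illustrative sketch only: periodic ex post review of automated decisions by
# sampling and independent human re-examination. Names, sample size and
# tolerance are assumptions for the purpose of example.
import random


def sample_for_review(decisions, sample_size=50, seed=0):
    """Draw a reproducible random sample of decided cases for human review."""
    rng = random.Random(seed)
    return rng.sample(decisions, min(sample_size, len(decisions)))


def disagreement_rate(sampled, human_review):
    """Proportion of sampled cases where the human reviewer reached a different
    outcome (human_review maps case id -> the reviewer's outcome)."""
    differing = sum(
        1 for case in sampled if human_review[case["id"]] != case["outcome"]
    )
    return differing / len(sampled)


def needs_escalation(rate, tolerance=0.05):
    """Escalate for fuller investigation if disagreement exceeds the tolerance."""
    return rate > tolerance


# Hypothetical data: machine outcomes, and a reviewer who grants every sampled case.
decisions = [{"id": i, "outcome": "grant" if i % 4 else "refuse"} for i in range(200)]
sample = sample_for_review(decisions, sample_size=20)
review = {case["id"]: "grant" for case in sample}
rate = disagreement_rate(sample, review)
print(rate, needs_escalation(rate))
```

The design choice is deliberately modest: like the Singapore approach mentioned above, it judges the system by its decisions rather than by trying to open the black box.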

More broadly, in order to plan for the future, I wonder whether it would be a good idea for every country to have some sort of Office of Technology Assessment (as was created in the US in 1972 and wrongly abandoned in 1995), consisting of experts thinking about the future and giving advice to law-makers and others. At the other end of any regulatory system, there should be accessible, efficient, low-cost, expert tribunals to deal with any complaints from those who claim to have been unfairly treated by AI – or who claim that an AI system does not comply with the law.

Ethical principles need to be internationalised if they are to be effective in many cases, albeit that there will be variations given differing social and cultural standards. Anyway, currently, as with intellectual property, there is no universal approach to AI ethics. If a universal approach does emerge, it will probably arise from national efforts. There are obvious difficulties in the present global climate. At its most extreme, it’s all very well to have rules for AI in warfare, but is any country going to trust all other countries to observe the rules?

One other aspect which is very important, arguably more important than anything else, is education. As far as possible, people should be protecting themselves rather than looking to the state to protect them. That is what human dignity and human autonomy require. We should be educating the young – and indeed the old – to fend for themselves in the AI world, just as in every other world in which they live and expect to live.

May I end by referring to my own world of the judicial tribunals. In due course, I believe that a machine will be able to resolve disputes of fact or law at least as well as a judge or arbitrator, and far more quickly and cheaply. Initially, I think people will want to know how the machine has arrived at its decision, and, whether or not they are told, they will want the machine’s decision to be considered by a human as the final arbiter. But if and when it becomes clear that the machines are up to the job, I suspect that the demand for any final human input, and even for information as to how the machine arrived at its decision, may wither. That would be a very good, and to me slightly frightening, example of what I have called the feedback factor, as well as being an example of the possible humanity-sapping effect of AI.

David Neuberger

Venice, 24th March 2022