03 May 2020

The EC white paper on AI: Will this be the defining moment in the regulation of AI?

Mark Lewis argues that a sensible balance can be struck between encouraging the development of AI and protecting citizens from harm

The European Commission's Brussels headquarters: 'The commission recognises the need for sufficient flexibility in its approach to accommodate technical progress'

The publication of the European Commission's white paper, On Artificial Intelligence – A European Approach to Excellence and Trust, provides a good opportunity to re-evaluate the challenges in, and approaches to, regulating AI.

As a white paper open to consultation until 19 May 2020, it may not, of course, reflect the commission's ultimate views on its approach to regulating AI. But the white paper shows that much detailed thought has gone into the options for, and the constructs and mechanics of, regulating AI within the EU. In due course, we shall see how moveable the commission's thinking is.

The commission's approach to AI in the white paper will be familiar to those who have studied and applied the EU acquis in the information society area: recognise the good that may come of new technologies or technology business models, and therefore the need to encourage them (for example, by encouraging research); but also recognise their wider societal impact, anticipate their potential for harm, and therefore see the need to regulate them ex ante.

Given the challenges in regulating fast-paced technologies like AI, what are the main risks in regulating it as the commission proposes? And whatever approach the commission ultimately takes, it is likely to be highly significant for world trade.


The commission notes that the EU ‘can leverage regulatory power, stronger economic and technological capabilities, diplomatic strengths and external financial instruments to advance the European approach and shape the global framework. The European Union is and will remain the most open region for trade and investment in the world, but this is not unconditional. Everyone can access the European market as long as they accept and respect our rules.’

So, leaving aside the 'excellence' element of the white paper, the 'trust' element of the commission's approach to AI is contained in section five of the white paper, and in particular section 5c: 'A key issue for the future specific regulatory framework on artificial intelligence is to determine the scope of its application. The working assumption is that the regulatory framework would apply to products and services relying on AI. AI should therefore be clearly defined for the purposes of this white paper, as well as any possible future policymaking initiative.'

The commission recognises the need for sufficient flexibility in its approach ‘to accommodate technical progress while being precise enough to provide the necessary legal certainty’.

Yet it appears to suggest adopting a technical definition of the main elements of AI based on two models: one taken from its communication Artificial Intelligence for Europe; the other from the High-Level Expert Group on Artificial Intelligence.

Both models seem to me to be potentially stultifying and unnecessarily complex: after all, there is no consensus outside the world of regulation on a definition of the various types of AI [1].

What chance is there of regulation based on such definitions achieving the commission’s stated objectives?

Ostensibly more promising is the commission’s intent to regulate potentially high-risk AI in two ways. First, according to its application in ‘specifically and exhaustively listed’ sectors, such as healthcare, transport and energy. Second, according to whether ‘significant risks’ are likely to result, for example AI applications that ‘pose risk of injury, death or significant material or immaterial damage’.

There could also be additional, intrinsically high-risk, instances of AI that would need to be regulated. For example, the use of AI in recruitment and employment situations affecting workers’ rights, and in ‘remote biometric identification’ and ‘other intrusive surveillance technologies’.

There follows, in section 5d, quite detailed consideration of the kinds of mandatory legal requirements that could be imposed on those deploying AI, by reference to the following 'key features':

  • training data;

  • data and record-keeping;

  • information to be provided;

  • robustness and accuracy;

  • human oversight; and

  • specific requirements for inherently high-risk AI applications, e.g. in remote biometric identification.

The remaining sections of the white paper concern the mechanics of regulation, including to whom and how the regulation would apply (‘addressees’). Readers should note that the proposed regulation would extend to all those providing AI products and services in the EU, whether or not they are established in the EU.

The proposed regulation will therefore be addressed to, amongst others, US and Chinese AI technology providers.

Overall, the commission's proposals, though intended to be flexible, dynamic and principles-based, must also, as the commission rightly states, deliver a high level of legal and regulatory certainty. I think that outcome must demand a significant level of detail and quasi-rulemaking: in effect, 'command-and-control regulation' [2].

Even if the commission does not adopt either or both the AI definitions mentioned above, and instead adopts some broader definition for its regulation, there is still the question of its risk-based approach.

The outline I have given here should indicate that, if adopted largely as is, that approach would have to be capable of wide-ranging application yet also highly detailed in order to achieve the required level of legal and regulatory certainty, as would the mandatory legal requirements listed in section 5d: again, in effect, 'command-and-control regulation'.

So, recognising, as many authoritative academics, commentators, practitioners and governments do, the challenges – some would say futility or, worse, counterproductivity – in regulating by ‘hard law’ fast-paced technologies like AI, what are the main challenges in regulating AI as the commission proposes?

The two key risks in regulating AI are the 'pacing problem' and the 'uncertainty paradox' [3].


As its name implies, the pacing problem arises when technology develops faster than regulation, or the ability of regulators to keep pace with technological innovation.

This comes about, among other reasons, because of:

  • the sluggishness of the regulatory process, especially if it involves primary legislation (look how long EU measures take);

  • the limited knowledge of regulators about rapidly developing technologies such as AI. This is not intended as a criticism of regulators: after all, how can they be expert in these technologies? There are constraints on engaging external experts, and one cannot expect such experts to be embedded in the legislative process itself. (The commission does not, apparently, suffer from this last problem.);

  • paradoxically, information overload, which makes it difficult for regulators to select the most salient information from masses of data in deciding if and how to act; and

  • the uncertainty of the direction of emerging technologies, and therefore the lack of, or only partial, visibility of the risks for which regulators should legislate (which the commission has acknowledged in the white paper).

The uncertainty paradox (also called the ‘Collingridge dilemma’) arises as regulators face the following challenges.

When emerging technologies appear or are in their early stages of development, and therefore their harmful or beneficial impacts are unclear and uncertain, early ex ante regulation is claimed to stifle innovation and could arrest the development of the potential benefits of those technologies. This largely explains the US approach to the regulation of AI.

But if regulators ‘wait and see’ before regulating – in effect, achieving clarity and reducing uncertainty – it will probably be too late for effective regulation, as the maturing technology is by then not only embedded in various processes, but in society itself.

Moreover, by then, the public might actually want or be willing to take its own cost-benefit view of the technologies. And so the public might trade largely (then) unknown risk against wrongly perceived rewards: the ‘bargain’ between digital citizens and online providers for the use of citizens’ personal data and the intrusion into their private lives says it all, even in post-GDPR society.

And how easy would it now be to repeal Section 230 of the US Communications Decency Act 1996?

Underlying much reluctance to regulate important emerging technologies like AI is governmental concern about 'innovation arbitrage' [4].

Put simply, innovators and entrepreneurs will take their innovations and entrepreneurship where legal and regulatory regimes pose fewer, or no, challenges, or to countries or regions where there are positive incentives to establish their technology ventures, or both.

Acceptance of innovation arbitrage typifies the US approach to AI regulation, at both federal and state levels: encourage, incentivise, and regulate ex ante only where you think you must, as with autonomous vehicles.

The commission also recognises this challenge, but clearly takes a different approach to regulating technologies. There are various arguments as to why AI poses greater, even unique, challenges to regulators compared with other emerging technologies.

These include:

  • the difficulty in capturing where AI development occurs in a globalised world;

  • the geographical dispersal of the teams developing AI code and algorithms;

  • even if teams are in the same location, the diversity of the supply chain engaged in the development of AI, for example reliance on third-party testing results and databases;

  • the insurmountable – as some would say – difficulty of opening 'black box' AI to achieve transparency and explainability; and

  • in ex post regulation, the apparent autonomy of AI, which might blur legal foreseeability, remoteness and causation in attributing culpability and providing recourse for AI deployment [5].

I think that all these challenges can be overcome by sensible, measured regulation. Clearly, the commission has thought hard about them. But whether it can achieve its objective of effectively regulating AI for the benefit of EU-kind, fostering a thriving AI industry without encouraging innovation arbitrage, remains to be seen.

The purpose of this article is to alert readers to the current EU proposals, to encourage them to delve into the detail of the white paper, and to prompt them to contribute to the debate by submitting their views to the commission by 19 May 2020.

Mark Lewis is visiting professor in practice at the Department of Law, London School of Economics & Political Science

Endnotes

1 For an outline of the five major types of AI, see Margaret A. Boden, AI: Its Nature and Future, Oxford University Press, 2016, pages 6 and 7.

2 Ryan Hagemann, Jennifer Huddleston Skees and Adam Thierer, Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future, Colorado Technology Law Journal, written 5 February 2018, page 41.

3 For useful and accessible discussions of the pacing problem and the uncertainty paradox (though expressed differently), see generally Gijs Leenders, The Regulation of Artificial Intelligence – A Case Study of the Partnership on AI, Becominghuman.ai, 13 April 2019, and Hagemann, Huddleston Skees and Thierer, Soft Law for Hard Problems, noted above.

4 See Hagemann, Huddleston Skees and Thierer, Soft Law for Hard Problems, noted above, at page 71.

5 Summarised neatly in Leenders, The Regulation of Artificial Intelligence – A Case Study of the Partnership on AI, noted above, citing Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies and Strategies, Harvard Journal of Law & Technology, Volume 29, Number 2, Spring 2016, pages 353-400.

