It’s a well-written book, accessible enough that an interested reader from any background could understand the topics and follow his arguments, yet with enough depth for a specialist in the subject. The theme is strong throughout: AI is unique and unpredictable, unlike anything that has gone before it, and it requires new rules, regulations, and institutions to enforce them.
This book does not set out to write the rules for regulating AI per se; rather, it examines the issues that arise when attempting to build rules and regulations around this new technology, and puts forward principles to consider when creating them. There are legal and ethical questions to be answered in this area. Turner attempts to highlight these, bringing them all together in one place so that the new institutions and regulators will know what they are facing when writing the rules for robots.
Defining AI
It opens with an overview of AI, highlighting the unique features and challenges it poses, the ones that need to be accounted for in law. Turner points out from the start that it is curious that a book such as this has to treat the definition of its subject matter as a topic of debate in the first place; other books on general concepts don’t find it necessary to spend a chapter defining themselves. Therein lies a unique problem with AI: we still have not reached a consensus on what exactly it is, or, more specifically, where the boundary lies between an intelligent system and an unintelligent one. Why is this important? In order to have a functioning legal system, its subjects need to understand the rules and their implications.
For the purposes of the book, Turner does not provide a universal definition of AI to be applied in any context. Instead he opts for one that is fit for purpose: one suited to regulation, which highlights the unique factors that need rules written for them. Turner uses the following definition of AI: “Artificial Intelligence is the ability of a non-natural entity to make choices by an evaluative process.” I would certainly agree that defining AI for specific contexts, at this stage at least, is more useful than trying to reach universal agreement on what constitutes AI, and Turner is very thorough in analysing other definitions before fully explaining every part of his own and how it feeds into the rest of his arguments.
Challenging legal concepts
Turner outlines the more sceptical view of AI: that maintaining laws over it is not as difficult as public perception holds. These sceptics would argue for a gradual or incremental approach to regulating robots, since we already have legal principles in place that need only be adapted to a new technology such as AI. Turner does not share this view, arguing instead that AI is unlike anything we have seen or regulated before, posing unique challenges that our existing legal structures can’t effectively handle.
What follows is a thorough examination of the unique features of AI that require a more dramatic shift in legislation, such as how AI will be called upon to make moral choices. The obvious example of this is the classic thought experiment of The Trolley Problem. Even though the situation might appear extreme and unlikely to occur in actuality (excepting the case of self-driving cars, of course), The Trolley Problem highlights the mechanisms behind every moral decision that an intelligent machine will have to make. His argument against the incremental method is convincing, as he lists all the ways in which AI is truly unique and presents legal hurdles. That is to be expected: after the initial discussion and repudiation, no counter-arguments in favour of the incremental approach are given. It could perhaps have benefitted from a longer discussion of the other approaches, but that does not take away from the persuasiveness of his thesis.
Turner then delves into the legal aspects of regulating AI, spread across three major themes: responsibility for AI; rights for AI; and legal personality for AI. Within this structure, the reader can see how different areas of law pertain to different features of AI, which really strengthens the claims made about AI’s uniqueness.
Regulating the robots
The final three sections look at how to build the regulator and how to enforce the rules for both the creators and the creations. Turner uses the example of Asimov’s Laws (as must all books on robots and law!) to show why we should be building institutions before writing laws for robots. Rather than asking what the laws should be, the first question should be: “Who should write them?” The rest of the section is then taken up with discussions of the best ways to build these institutions: looking at the benefits of doing so on a cross-industry basis, analysing current trends in national and international AI regulation, and considering how to promote international cooperation and coordination of AI regulation, before finishing with a look at some methods for the implementation and enforcement of laws.
The next section concerns rules for the (human) creators, and Turner notes that, “Rules for creators are a set of design ethics. They have an indirect effect in that the potential benefit or harm that they are seeking to promote or restrain happens via another entity: the AI.” He then moves on to highlight principles and guidelines released by various organisations, such as the Institute of Electrical and Electronics Engineers’ report Ethically Aligned Design, the EU’s initiatives, the Asilomar principles, and so on, before asking whether AI can be regulated separately from the force of law. That is, can regulation be a set of internal rules for an industry, or should it be backed up by the stronger force of law? It’s an interesting question, one that gets muddied by the lack of a clear international consensus on the definition of AI and how to regulate it.
The final section then turns to the rules for the creations (the robots) themselves. A lot of this section is less revelatory than the previous one, as it covers the ground most often discussed in the public arena. An example is the explainability of actions, often brought up with regard to autonomous vehicles, medical AI, or any domain where there is the potential for harm, which in turn has an impact on liability and insurance. This is not to say that there are no surprises in the final section, with the discussion of the laws of identification proving highly illuminating.
Bias is another emerging discussion, as human biases are liable to creep into AI programming when design teams are not diverse enough. Turner’s suggestion that AI should undergo a review for bias during the design stage, either by a specialist review panel or by an AI audit program, is an excellent one that may achieve more than efforts to ensure diversity across the entire spectrum of design teams: it is more targeted and adds another layer of accountability. Ultimately, I agree with Turner’s conclusion that AI is unlikely to ever be 100% bias-free; instead, we should make sure that it reflects the values of the society that created it.
Conclusion
Overall, it is a well-structured and well-presented book that achieves what it sets out to do. Turner points out in the epilogue that he is not writing the rules so much as setting out a blueprint for future institutions and regulations. Perhaps it could have benefited from a few concrete recommendations along the way, and I definitely feel that it needed a strong concluding chapter: the epilogue is just a short addendum after the final section. This is only a minor complaint, as the discussions are thorough but accessible. There is a pleasant lack of jargon and legalese obstructing the flow of the writing, without any loss in depth of analysis, meaning expert and layman alike can enjoy this book. The issues surrounding the regulation of artificial intelligence are discussed and analysed in detail without it feeling like a simple reiteration of what has already been said on the matter, aided by the clear and focused structure of the book and the regular use of case studies as evidence. Ultimately, Robot Rules does feel like a set of blueprints, one that could be followed as we regulate robots, showcasing the problems that AI might throw at us and what we can do to get around them.