17 Sep 2020

LegalTech needs an AI measuring stick

The legal industry needs a standardised system to measure AI in the technology it employs, argues Dr Lance Eliot



AI is increasingly being applied to the field of law and the legal industry: many LegalTech providers are inexorably augmenting their wares via advances in Machine Learning and Natural Language Processing.

Unfortunately, it is difficult for legal professionals to readily discern what value this AI adds. Vendors are apt either to overstate the capabilities of the AI empowerment or to proffer outsized marketing claims suggesting the tech has miraculously become a vaunted “robo-lawyer” (implying a semblance of sentience or superhuman abilities).

One of the principal reasons that this state of affairs exists is the lack of a convenient and readily usable way to gauge the degree of AI being infused into these emerging LegalTech offerings. As astutely opined by the famous management guru Dr Peter Drucker, you can’t manage that which you can’t or don’t measure.

In short, there is no existing standard for the levels at which AI can be applied to the law, and that leaves a gap in sizing up computer-based systems within the legal industry. It would be immensely prudent for stakeholders to establish a reasoned set of measures of the extent to which AI is being employed.

Shifting gears, we can look at how other domains have dealt with a similar conundrum: consider the case of AI-enabled autonomous vehicles.

There is a worldwide standard for self-driving cars, SAE J3016, established by the globally recognised SAE (Society of Automotive Engineers) standards group. You might be aware of it and its stated levels of driving automation.

For example, an existing Tesla using its Autopilot software is considered semi-autonomous and is rated as Level 2 on the official SAE scale, which runs from zero to five. At Level 2, a human driver must be present and attentive to the driving task at all times. The AI is considered an Advanced Driver-Assistance System, merely assisting the human, and is decidedly not operating truly autonomously.

Waymo is known for its pursuit of Level 4 self-driving cars. Per the SAE standard, a Level 4 vehicle is rated as autonomous, meaning there is no need for a human driver and no expectation that a human is at the wheel. Level 4 is somewhat restrictive, in that the AI driving system is only permitted to drive in particular settings, such as only in fair weather or within a defined, geofenced area.

The topmost level of autonomy is Level 5, which exceeds Level 4 by stipulating that the AI must be able to drive in essentially any circumstance in which a human driver could drive, thus removing the restrictions associated with Level 4.

The advantage of such a taxonomy is that it offers a succinct way to communicate autonomy. It also makes it relatively easy to compare various brands of self-driving cars, merely by indicating the level each has attained. Something similar is needed in the arena of applying AI to the law.

Why reinvent the wheel? Let’s adapt the system used to measure AI autonomy in self-driving cars to fit the AI-and-law domain. Based on my research, seven levels of autonomy should be used to indicate the extent of automation and AI employed in legal discourse and reasoning:

  • Level 0: No legal automation
  • Level 1: Simple automation for legal assistance
  • Level 2: Advanced automation for legal assistance
  • Level 3: Semi-autonomous automation for legal assistance
  • Level 4: Domain autonomous legal advisor
  • Level 5: Fully autonomous legal advisor
  • Level 6: Superhuman autonomous legal advisor

Briefly, the levels range from zero to six, with the least automation designated zero and the most designated six. This numbering is easy to remember and provides a comprehensive range encompassing all degrees of AI adoption.
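
To make the taxonomy concrete, here is a minimal sketch of how the proposed levels might be encoded in software, say for tagging entries in a LegalTech vendor directory. The enum names, the catalogue, and the product names are illustrative inventions for this article, not part of any published standard.

    from enum import IntEnum


    class LegalAILevel(IntEnum):
        """Proposed levels of autonomy for AI applied to legal reasoning."""
        NO_AUTOMATION = 0         # No legal automation
        SIMPLE_ASSISTANCE = 1     # Simple automation for legal assistance
        ADVANCED_ASSISTANCE = 2   # Advanced automation for legal assistance
        SEMI_AUTONOMOUS = 3       # Semi-autonomous automation for legal assistance
        DOMAIN_AUTONOMOUS = 4     # Domain autonomous legal advisor
        FULLY_AUTONOMOUS = 5      # Fully autonomous legal advisor
        SUPERHUMAN = 6            # Superhuman autonomous legal advisor


    # Example: tag a few hypothetical products with their claimed level.
    catalogue = {
        "BasicDocAssembler": LegalAILevel.SIMPLE_ASSISTANCE,
        "NLPContractReviewer": LegalAILevel.ADVANCED_ASSISTANCE,
    }

    for product, level in catalogue.items():
        print(f"{product}: Level {int(level)} ({level.name})")

Because the levels form an ordered scale, an integer-backed enum allows direct comparisons, for instance checking whether a product claims a level at or above SEMI_AUTONOMOUS.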

Most of today’s LegalTech would rate as a Level 1; those that have extended their software with some amount of bona fide AI add-ons would possibly rate as a Level 2.

Level 3 consists of semi-autonomous automation that provides legal assistance; today this is mainly found in pilots and experiments, and it represents the cusp of the genuinely autonomous territory that begins at Level 4. At Level 4, the AI’s autonomy is restricted to a subdomain of law, while at Level 5 the AI is considered fully capable across all avenues of the law.
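
As a rough illustration of how those boundaries might be operationalised, the sketch below maps two assumed attributes of a system, whether a supervising lawyer must stay in the loop and whether the system is confined to a subdomain of law, onto Levels 3 to 5. Both attribute names are hypothetical; an actual standard would need to define such criteria precisely.

    def classify_autonomous_tier(needs_human_lawyer: bool, domain_restricted: bool) -> int:
        """Rough decision rule for the Level 3 to 5 boundaries described above.

        Hypothetical inputs, for illustration only:
          needs_human_lawyer -- True if a supervising lawyer must stay in the loop
          domain_restricted  -- True if the system covers only a subdomain of law
        """
        if needs_human_lawyer:
            return 3  # semi-autonomous: assists, but a lawyer remains responsible
        if domain_restricted:
            return 4  # autonomous, but only within a bounded legal subdomain
        return 5      # fully autonomous across all avenues of the law


    # Example: a pilot contract-analysis tool that still requires lawyer sign-off.
    print(classify_autonomous_tier(needs_human_lawyer=True, domain_restricted=True))  # -> 3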

There is also a futuristic Level 6, covering the possibility, presumably remote, that AI could someday eclipse human intelligence and become so-called “superhuman”. In contrast, the SAE self-driving car standard tops out at human driving capability and does not contemplate that it might one day be feasible for AI to drive in a manner beyond that of human drivers (a matter some critics say should be added to the SAE standard).

Overall, the notion presented here is that by taking an accepted convention for measuring AI autonomy, and judiciously adapting it to the needs of AI as applied to the law, we can reduce the exaggerated claims and other unruly chatter about AI in the legal industry.

Dr Lance Eliot is chief AI scientist at Techbrium and a Stanford fellow at CodeX: The Stanford Center for Legal Informatics

