06 Aug 2019

The Judgebots Are Here

In a somewhat surprising move, France has banned data analytics related to rulings by judges, and there are calls for the ban to be extended to all lawyers.

By David Cowan


The new law, Article 33 of the Justice Reform Act, is aimed at preventing anyone, especially legaltech companies focused on litigation prediction and analytics, from publicly revealing the pattern of judges’ behaviour in relation to court decisions. 

The new legislation bans such data analytics and creates a new offence carrying a prison sentence of up to five years for violators.

The legislation states: “The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.” The ban is the first of its kind, and while it is reverberating around the legaltech and data analytics business, it is also coming into view for in-house legal departments and law firms.

The French may be pushing back, but developers are pushing ahead. Randy Goebel, a professor in the computer science department of the University of Alberta in Canada, working with Japanese researchers, has developed an algorithm that can pass the Japanese bar exam. The team is now working to develop AI that can “weigh contradicting legal evidence, rule on cases, and predict the outcomes of future trials.”

This research, and the related policy concerns, raise a number of questions about the future of judgebots and other applications of predictive analytics. To weigh some of the issues at stake, RLJ offers two views: one from a prominent practitioner, the other from an academic.

Practitioner View

In a paper based on a speech given at the European Circuit Annual Conference in Stockholm, Charles Ciumei QC of Essex Court Chambers in London said the use of prediction tools “to assist human judicial decision making” was more achievable than “robot judges”. He suggests artificial intelligence (AI) is likely to be used to lower the cost and increase the speed of judicial decisions. Mr Ciumei, who has advised on the misuse of algorithmic trading software by hedge funds, noted that litigation consultants in the USA and UK already offer “strategic litigation insight” based on the use of AI.

Mr Ciumei argues, “Although the prospect of robot judges may seem like a remote one, it is worth bearing in mind that substantially automated dispute resolution processes already exist - for example, in the consumer sphere, on eBay. In any event, a more achievable aim might be the development of prediction tools to assist human judicial decision making.” Given the stress on court systems, Mr Ciumei says “In the current climate, anything that offers the prospect of reducing the costs of the justice system is likely to attract the attention of policy makers.”

Referring to academic research, Mr Ciumei highlighted the use of a machine learning algorithm to predict past European Court of Human Rights decisions, a task it performed with 79% accuracy (see below). He said, “In other words, decision-making at the admissibility stage of ECHR proceedings, at which the vast majority - about 84,000 in 2015 - of cases are sifted out, could be assisted by artificial intelligence, both saving time and therefore cost, and improving the quality of the system by speeding up final determinations.”

AI could increase settlements by improving the accuracy of outcome prediction and possibly reducing information asymmetry between parties with different levels of legal advice. However, Mr Ciumei pointed out limitations on “a future of machine-aided, or machine-driven, lawyering and judgment”. He said machine-learning models were “highly constrained and task specific”, which was why the “first and now most established legal application” was predictive coding of disclosable documents.

It would be “less of a stretch” to apply AI techniques to simple or high-volume cases, such as financial mis-selling or road traffic accidents, than to more complicated commercial disputes. Mr Ciumei gave the examples of commercial arbitration, which lacked “the comprehensive datasets” to train machine-learning models, and of pre-dispute steps, such as settling pleadings or evidence, which lacked “binary win-lose outcomes”.

A “black box” outcome, Mr Ciumei stresses, is not amenable to appeal in the sense of examining the defensibility of an outcome against external criteria, such as the law and evidence. “The implication is that appeals against such automated decisions might have to be as of right and effectively re-hearings, which might undermine any costs saving.”

He said current technology was not good at dealing with “outliers” or unusual cases. This may not be a problem in disclosure or litigation funding, but the concept of justice involves “getting the right result even in highly unusual cases, and those cases are often important in the development of law”. Mr Ciumei said the judicial process could be just as important as the outcome, because “People often need to be listened to whatever the ultimate outcome of their dispute. The knowledge that they have been given a fair hearing is a key part of accepting an adverse decision.”

Academic View

Computer scientists at University College London (UCL) have developed software and used it to accurately predict the results in hundreds of real-life cases. The UCL AI “judge” reached the same verdicts as judges at the European Court of Human Rights in almost four in five cases involving torture, degrading treatment and privacy. The algorithm examined English-language data sets for 584 cases relating to torture and degrading treatment, fair trials and privacy. In each case, the software analysed the information and made its own judicial decision. In 79% of the cases assessed, the AI verdict was the same as the one delivered by the court.
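For readers curious about the mechanics, the following is a minimal sketch of this kind of text-based judgment prediction, assuming n-gram features and a linear support vector machine broadly in the spirit of the published study; the load_echr_cases() loader is hypothetical, standing in for the study’s corpus of case texts and outcomes.

    # Minimal sketch of binary judgment prediction from case text.
    # Assumes scikit-learn; load_echr_cases() is a hypothetical loader
    # standing in for the study's 584 ECtHR cases, with labels
    # 1 = violation and 0 = non-violation.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    texts, labels = load_echr_cases()  # hypothetical: case texts and outcomes

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 4)),  # word n-gram features
        LinearSVC(),                          # linear SVM classifier
    )

    # 10-fold cross-validation; the published study reported roughly 79%
    scores = cross_val_score(model, texts, labels, cv=10, scoring="accuracy")
    print(f"mean accuracy: {scores.mean():.2f}")

The essential point is that the classifier never reasons about the law: it learns statistical associations between the wording of a case text and its recorded outcome, which is why balanced training data and honest cross-validation matter so much to the reported accuracy.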

Dr Nikolaos Aletras, the lead researcher from UCL’s department of computer science, said: “We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes.” He added, “It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights.” An equal number of “violation” and “non-violation” cases were chosen for the study.

In the course of developing the programme, the team found that the judgments of the European Court of Human Rights depend more on non-legal facts than on purely legal arguments, suggesting that the court’s judges are, in legal-theory terms, more “realist” than “formalist”. The same is true of other high-level courts, such as the US Supreme Court, according to previous studies. The most reliable factors for predicting European Court of Human Rights decisions were found to be the language used, as well as the topics and circumstances mentioned in the case texts.
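Continuing the hypothetical sketch above, the weights of the fitted linear model can be read back to see which n-grams pull a case towards one outcome or the other, a rough analogue of asking which language is most predictive; the step names below are those scikit-learn derives automatically from the class names.

    # Hypothetical follow-up to the sketch above: inspect which n-grams
    # carry the most weight for each outcome in the fitted linear model.
    import numpy as np

    model.fit(texts, labels)
    vectorizer = model.named_steps["tfidfvectorizer"]
    svm = model.named_steps["linearsvc"]

    terms = np.array(vectorizer.get_feature_names_out())
    order = np.argsort(svm.coef_.ravel())  # ascending by weight

    print("n-grams pointing towards non-violation:", terms[order[:10]])
    print("n-grams pointing towards violation:", terms[order[-10:]])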

The study corroborated the findings of other research on the factors that influenced the judgments of high-level courts. Dr Vasileios Lampos, a UCL computer scientist, added: “Previous studies have predicted outcomes based on the nature of the crime or the policy position of each judge, so this is the first time judgments have been predicted using analysis of text prepared by the court.” The findings by Aletras and his colleagues were published in the journal PeerJ Computer Science.

RLJ View

The terms “Robot Judge” or “Judgebot” are arguably not the best ways to frame the discussion of AI use in courts. The reality is that complex cases deal with extensive witness testimony and evidence, and also often lead to innovative legal interpretations and legal change. The application of AI in courts and in the work of judges will continue unabated, but we should not let our imaginations run away with us. There are certainly some ways AI will be of immediate and practical help:

  • In administration to streamline the court’s caseload. 
  • Giving insights into the range of valuable data coming before the judiciary. 
  • Supporting judges in organizing their material, finding jurisprudence, and assessing arguments. 
  • Assessing specific issues, such as the risk of recidivism in criminal cases.

Ultimately, however, these are ways in which AI acts as a support system, which still leaves the decision to humans. That said, such AI can have a significant impact on the management of cases and thereby influence their outcomes. There are borderline areas where, for instance, the AI identifies a pattern across a large number of cases and judges may be wary of overriding that data in the specific case in front of them.

This issue came up in a recent US case, in which a computer program used for bail and sentencing decisions was criticised as biased against black defendants, prompting a debate on bias that is set to become one of the defining policy issues as deployment of AI increases. In the Loomis case, AI was used to evaluate an individual defendant. The algorithm was built into a system called “Correctional Offender Management Profiling for Alternative Sanctions” (Compas), created by a company called Northpointe Inc. The algorithm indicated Mr. Loomis had “a high risk of violence, high risk of recidivism, [and] high pretrial risk.” This influenced the six-year sentence handed down, though the sentencing court was advised to take note of the algorithm’s limitations.

Recently, U.S. Chief Justice John Roberts was asked: “Can you foresee a day when smart machines, driven with artificial intelligences, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?” He answered, “It's a day that's here and it's putting a significant strain on how the judiciary goes about doing things.” There is perhaps some urgency in addressing this strain and in deciding how these questions are to be answered.

 

