16 Jan 2019

Lawyers need to collaborate with tech in AI

Inaugural lecture urges lawyers to collaborate with computer scientists on using AI in law.

alphaspirit / Shutterstock.com


Research Professor Mireille Hildebrandt used her inaugural lecture at The Allens Hub for Technology, Law & Innovation in Sydney to urge lawyers to collaborate more closely with computer scientists on the use of artificial intelligence (AI) in law. She framed the issue by stating, “We need constructive distrust, rather than naïve trust in ‘legal tech’.” Professor Hildebrandt, a research professor on interfacing law and technology at Vrije Universiteit’s Faculty of Law and Criminology, was discussing the impact of AI on law in her lecture.

To explain the issue she drew on the example of a 2013 US case in which a man in Wisconsin was arrested for attempting to flee a police officer and driving a car that had been used in a recent shooting. While none of his crimes mandated prison time, the judge in the case said the man had a high risk of recidivism and sentenced him to six years in prison. The judge had considered a report from a controversial computer program called COMPAS, a risk assessment tool developed by a private company.

Professor Hildebrandt explained, “This certainly involves reconsidering the use of potentially skewed discriminatory patterns, as with the COMPAS software that informs courts in the US when taking decisions on parole or sentencing.” She said researchers have found evidence that the COMPAS program discriminates against black offenders: “Though researchers agree that based on the data they are more likely to commit future crimes than white offenders, this caused a bias in the outcome regarding black offenders that do not recidivise.” She added, “They are attributed a higher reoffending rate than white offenders that never recidivise.”
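The statistical pattern behind that finding can be illustrated with a small, purely hypothetical calculation (the groups, rates and figures below are invented for illustration and are not drawn from COMPAS or any real dataset). If recorded reoffending rates differ between two groups, a risk tool that is equally ‘accurate’ for both in terms of sensitivity and precision will still label a larger share of non-reoffenders as high risk in the group with the higher recorded rate.

```python
# Purely hypothetical numbers, not COMPAS data. The same tool, with the
# same sensitivity (share of reoffenders it flags) and the same precision
# (share of flagged people who go on to reoffend), is applied to two groups
# whose recorded base rates of reoffending differ. The false-positive rate,
# i.e. the share of non-reoffenders labelled high risk, then differs too.

def false_positive_rate(base_rate, sensitivity, precision):
    """False-positive rate implied by a base rate, sensitivity and precision."""
    flagged_reoffenders = sensitivity * base_rate                      # per person
    flagged_non_reoffenders = flagged_reoffenders * (1 - precision) / precision
    return flagged_non_reoffenders / (1 - base_rate)

for label, base_rate in [("higher recorded reoffending", 0.6),
                         ("lower recorded reoffending", 0.3)]:
    fpr = false_positive_rate(base_rate, sensitivity=0.8, precision=0.8)
    print(f"group with {label}: {fpr:.0%} of non-reoffenders labelled high risk")
```

With these invented figures, 30 per cent of non-reoffenders in the first group are labelled high risk against roughly 9 per cent in the second, even though the tool treats both groups identically, which is the kind of disparity the researchers describe.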

This raises a broader issue, she explained: “COMPAS has given rise to a new kind of discussion about bias in sentencing, and once lawyers begin to engage in informed discussion about ‘fairness’ in ‘legal tech’ they may actually inspire more precise understandings of how fairness can be improved in the broader context of legal decision-making.”

She says AI tools “cannot be ‘made’ ethical or responsible by tweaking their code a bit”. Instead, she argues, “we should focus on training lawyers in understanding the assumptions of ‘AI’, especially its dependence on mathematical mappings of legal decision-making, as this has all kinds of implications that are easily overlooked.” To achieve this, lawyers should develop ‘a new hermeneutics’, or a new art of interpretation, that includes a better understanding of what data-driven regulation or predictive technologies can and can’t do. She said, “This may, for instance, mean that lawyers sit down with data scientists to define ‘fairness’ in computational terms, to avoid discriminatory application of technical decision-support.”
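As one illustration of what defining ‘fairness’ in computational terms might look like, the sketch below checks whether a tool’s false-positive rates differ across groups by more than a chosen tolerance. The criterion, the tolerance and the record format are illustrative assumptions rather than a standard prescribed by law, by Professor Hildebrandt or by any particular tool, and other computational definitions of fairness exist.

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended)."""
    flagged = defaultdict(int)          # non-reoffenders labelled high risk
    non_reoffenders = defaultdict(int)  # all non-reoffenders, per group
    for group, predicted_high_risk, actually_reoffended in records:
        if not actually_reoffended:
            non_reoffenders[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {g: flagged[g] / n for g, n in non_reoffenders.items() if n}

def violates_fpr_parity(records, tolerance=0.05):
    """One possible 'fairness' test: false-positive rates must not differ
    across groups by more than the tolerance."""
    rates = false_positive_rates(records).values()
    return max(rates) - min(rates) > tolerance

# Made-up records: (group, predicted high risk?, actually reoffended?)
sample = [("A", True, False), ("A", False, False), ("A", True, True),
          ("B", False, False), ("B", False, False), ("B", True, True)]
print(false_positive_rates(sample))  # {'A': 0.5, 'B': 0.0}
print(violates_fpr_parity(sample))   # True: a 50-point gap on these toy records
```

A check like this is precisely the kind of artefact lawyers and data scientists would need to negotiate together, since a different criterion (for instance equal precision rather than equal false-positive rates) can pass the same data that this one fails.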

She proposes that lawyers ask three questions before introducing new technologies that will redefine their profession as well as the legal protection they offer: What problem does this technology solve? What problems does it not solve? And what problems does it create? Professor Hildebrandt elaborated, “This requires research, domain expertise and talking to the people who may be affected: regulators, lawyers, but also and especially the ‘users’ of the legal system: citizens, consumers, suspects and defendants, the industry.”

She says computer-based predictions of legal judgments could help lawyers and those in need of legal advice decide whether or not to bring a case to court.  AI in the form of ‘argumentation mining’ could also help legal clerks quickly infer relevant case law, statutes and even doctrine with regard to a specific case, while identifying potentially successful lines of argumentation. “A concern could be that we engage ‘distant reading’ (reading texts via software) before being well versed in ‘close reading’, losing important lawyerly skills that define the mind of a good lawyer,” she says.

Another concern she raises “is that legislatures may want to anticipate the algorithmic implementation of their statutes, writing them in a way easily translated into computer code,” which “may render such statutes less flexible, and thereby both over-inclusive and under-inclusive, or simply unfair and unreasonable.” AI may also improve compliance by pre-empting people’s behaviour and reconfiguring their ‘choice architecture’, so that they are nudged or forced into compliance. She suggests, “Sometimes that may be a good thing, as long as this is a decision by a democratic legislature, and as long as such choice architectures are sufficiently transparent and contestable.”

Crime-mapping is another example of AI being brought into the administration of justice, this time through policing, but “this may displace the allocation of policing efforts to what the ‘tech’ believes to be the correct focus.” She explained, “Crime-mapping depends on data, which may be skewed, and the most relevant data may actually be missing.” Such blind trust on the part of police “may undermine effective policing (as they may remain stuck in what the data allows them to see), it may also demotivate street-level policing, as officers may be forced to always check the databases and the algorithms instead of training their own intuition.”
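The self-reinforcing effect she warns about can be sketched with a toy simulation (the districts, rates and starting figures below are invented for illustration). If patrols are allocated according to where crime has previously been recorded, and recording in turn depends on where patrols are sent, an early skew in the data persists even when the underlying crime rates are identical.

```python
import random
random.seed(0)

# Invented scenario: two districts with identical true crime rates,
# but historical data that starts out skewed towards district A.
true_crime_rate = {"district A": 0.10, "district B": 0.10}
recorded_crimes = {"district A": 5, "district B": 1}

for week in range(20):
    total_recorded = sum(recorded_crimes.values())
    for district, rate in true_crime_rate.items():
        # allocate 10 patrols in proportion to *recorded* (not actual) crime
        patrols = round(10 * recorded_crimes[district] / total_recorded)
        # each patrol observes a crime at the district's *true* rate
        recorded_crimes[district] += sum(random.random() < rate for _ in range(patrols))

# District A still dominates the recorded data: the figures never reveal
# that the two districts are alike, because patrols look where they already looked.
print(recorded_crimes)
```

In this toy setup the recorded numbers cannot correct themselves, which is one way of reading her point that officers “may remain stuck in what the data allows them to see”.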

Professor Hildebrandt says that, done well, AI may contribute to proper compliance with data protection law, “Or it may undermine the objectives of the law, by turning it into a set of check-boxes, where the real impact is circumvented by way of cleverly designed pseudo-compliance.”

Mireille Hildebrandt