AI in the Legal System
Peking University Law School is partnering with Gridsum, a cloud-based analytics platform, to launch a new research centre to determine the potential uses of AI in law in China. The influence of technology on the practice of law is now a key concern to legal educators and lawyers. In China it is coming under the spotlight due to the country’s huge technological push and the widespread adoption of AI in potentially sensitive areas.
At the Peking University Legal AI Lab and Research Institute, Gridsum provides the research foundation, drawing on its experience developing Faxin Wei Su, an AI-powered litigation tool. The two have collaborated before – in 2014, researchers at Peking University Law School used Gridsum’s media dissection tool to analyse news reports and published a report on media trends that year.
The partnership is part of a plan put in place by China’s State Council. In July 2017, the government approved the “Next Generation of Artificial Intelligence Development Plan” to increase investment in AI research and development across the country and its industries. China’s Ministry of Industry and Information Technology has also launched a three-year plan to develop a full regulatory framework that allows for more AI innovation and promotion.
The country has the world’s largest CCTV monitoring system, with over 170 million cameras installed and a further 400 million set to be in place by 2020. Beyond their sheer numbers, these watchful eyes are now powered by AI and facial recognition technologies, making it increasingly difficult for anyone to hide. In a demonstration of the system last year, a BBC reporter was located and apprehended in only seven minutes: an image of his face was flagged to the authorities as a suspect’s, and they were then easily able to track him on the many cameras he passed over a short distance.
This technology is being used to assist the police force in their investigations and to help with overall security. However, it is also being adopted by private corporations, which are using it to monitor employees. The potential for misuse and abuse of this power is huge, and it threatens to cross ethical and legal boundaries. Authoritarian regimes could easily use increased AI-powered surveillance to monitor political opposition or dissidents, or even to track down journalists and clamp down on information sharing.
Surveillance is also portable now in China, with a new technology developed by LLVision. Chinese police forces have started to don sunglasses with special lenses fitted with facial recognition software, giving them the ability to quickly identify individuals in a crowd. In comparison with traditional CCTV, the smart glasses are much more efficient, due to better picture quality, and the lack of a time lag inherent in a CCTV system – even if a screen is being monitored live, it takes valuable time to alert the authorities if a person of interest is spotted. Critics argue that this new ubiquity of surveillance infringes on citizens’ privacy, and that racial profiling could become one of many unintended consequences of arming law enforcement agents with such devices.
Facial recognition software works by comparing the images the wearer or camera sees against a database of suspects’ faces to check for a match. Protecting that database is therefore a huge priority, as is ensuring that citizens’ personal data is not misused or used without consent. There are no independent courts in China and privacy protections are few, raising further concerns for critics of the surveillance programme.
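The matching step described above can be sketched in a few lines. This is a minimal illustration only: real systems compare high-dimensional face “embeddings” produced by a neural network, and the function names, toy vectors, and 0.8 similarity threshold here are all assumptions for the example, not details of any deployed system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(observed, database, threshold=0.8):
    """Return the ID of the best-matching entry in the suspect database,
    or None if nothing is similar enough to the observed embedding."""
    best_id, best_score = None, threshold
    for suspect_id, stored in database.items():
        score = cosine_similarity(observed, stored)
        if score >= best_score:
            best_id, best_score = suspect_id, score
    return best_id

# Toy database of 3-dimensional embeddings (real systems use 128+ dimensions).
db = {
    "suspect_001": np.array([0.9, 0.1, 0.0]),
    "suspect_002": np.array([0.0, 1.0, 0.0]),
}

print(match_face(np.array([0.85, 0.15, 0.05]), db))  # close to suspect_001
print(match_face(np.array([0.0, 0.0, 1.0]), db))     # dissimilar to both -> None
```

The threshold is the policy-relevant knob: set it too low and innocent bystanders are flagged as suspects, which is precisely the misidentification risk critics raise.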
In June 2017, China passed a new cybersecurity law – at such a rapid pace that many were not familiar with its contents when it came into effect. Within a few months, the Cyberspace Administration of China (CAC) had investigated three of the country’s most prominent social media platforms – WeChat, Weibo, and Baidu Tieba – on grounds of violating rules relating to terror, rumours, and pornography. The move left many concerned about the perceived tightening of control over the internet in the country.
The law shares some similarities with the EU’s upcoming General Data Protection Regulation (GDPR), including provisions on best practices for cyber safety and on transfer restrictions, but it is also being used as another tool to police the internet. It moves beyond cybersecurity into cybercrimes such as online terrorism, hate crimes, and the spreading of fake news, and goes even further into regulating behaviour deemed damaging to national security.
A prominent example is the hotel chain Marriott, which was kicked off Chinese cyberspace in January 2018 for listing Hong Kong, Taiwan, Tibet, and Macau as separate countries on a customer questionnaire, regions over which China maintains sovereignty. Marriott apologised for the incident, but the Shanghai Cyberspace Administration still opened an investigation into whether the hotel had violated the cybersecurity law. The ambiguity over which entities can be charged, and exactly which behaviours will be deemed a threat to national security, means the law can potentially be applied more widely than stated, giving the government more control over China’s cyberspace. While Chinese companies are accustomed to high levels of governmental control and to how such laws are drawn up, foreign businesses may have to treat the Marriott case as a warning and take great care in their online dealings in China.