The European Union has long been interested in AI. It first investigated the area in 2014, producing guidelines on the regulation of robotics, and has continued to work in this field since, with a strong focus on the regulation of AI.
In the last two years, there has been a wealth of new publications, guidelines and political declarations from various EU bodies on AI. These provide insight into the future of AI in Europe – including how it will be regulated, what governments will promote, who will be liable for defects in AI and how safety standards will be enforced. These insights are of value both to legal practitioners operating in the emerging technologies sector and to organisations developing, using or considering the procurement of AI products.
While these are all interesting and important documents, their length and sheer number make it a time-intensive process to get a handle on the overall picture they present. This article aims to provide an overview by briefly summarising the key recent EU publications on AI.
Documents reviewed in this article
|     | Title | EU body | Date of publication |
|-----|-------|---------|---------------------|
| 1.  | Declaration of cooperation on Artificial Intelligence | EU Member States | 10.04.2018 |
| 2.  | Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe | European Commission | 25.04.2018 |
| 3.  | Artificial Intelligence: A European Perspective | European Commission - Joint Research Centre | 2018 |
| 4.  | A Definition of AI: Main Capabilities and Disciplines | European Commission - High-Level Expert Group on Artificial Intelligence | 08.04.2019 |
| 5.  | Ethics Guidelines for Trustworthy AI | European Commission - High-Level Expert Group on Artificial Intelligence | 08.04.2019 |
| 6.  | Building Trust in Human Centric Artificial Intelligence, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions | European Commission | 08.04.2019 |
| 7.  | Policy and Investment Recommendations for Trustworthy AI | European Commission - High-Level Expert Group on Artificial Intelligence | 26.06.2019 |
| 8.  | Liability for Artificial Intelligence and Other Emerging Digital Technologies | European Commission (New Technologies Formation) | 27.11.2019 |
| 9.  | Report on the safety and liability implications of AI, the Internet of Things and robotics | European Commission | 19.02.2020 |
| 10. | White Paper On Artificial Intelligence - A European Approach To Excellence And Trust | European Commission | 19.02.2020 |
1. Declaration of cooperation on Artificial Intelligence (AI) - 10.04.2018
Twenty-five European countries signed the Declaration of cooperation on AI. The purpose of this relatively concise declaration is to bring together the national AI initiatives of each signatory. One of its aims is to forge a unified European approach to the most important issues relating to AI. These include:
- EU competitiveness: ensuring Europe's competitiveness in the research and deployment of AI through boosting Europe's technology and industrial AI capacity;
- Education and upskilling for citizens: addressing socio-economic challenges created by the transformation of the labour markets; and
- Transparency: ensuring there is an adequate legal and ethical framework upon which to handle the various social, economic and ethical questions which arise as a result of the adoption of AI solutions.
The declaration sets out high-level aims for the use of AI in the EU. It intends to bring greater alignment to the approaches being developed and to the adoption of AI solutions throughout the signatory states. In particular, it aims to address the challenges arising from AI's transformation of the labour market through widespread modernisation of Europe's education and training systems on AI, efforts to upskill and reskill European citizens to improve AI and data literacy, and the promotion of an environment of trust and accountability around the development and use of AI. This declaration forms the basis of many of the documents discussed in this article.
2. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe - 25.04.2018
This communication follows on from the signing of the Declaration of cooperation on Artificial Intelligence on 10 April 2018 (item 1 above). It sets out a European initiative on AI to boost the EU's technological and industrial capacity through the uptake of AI across the economy by both the private and public sectors. Furthermore, the communication looks to address the potential socio-economic changes brought about by AI by:
- encouraging the modernisation of education and training systems;
- nurturing home talent;
- supporting labour market transitions;
- adapting social protection systems; and
- ensuring that an appropriate ethical and legal framework based on the EU's values and in line with the Charter of Fundamental Rights is in place.
The communication reports that the EU lags behind the US and China in terms of private investment in AI. In 2016, EU investment totalled €2.4-3.2bn, compared to €6.5-9.7bn in Asia and €12.1-18.6bn in North America. Accordingly, creating a fertile environment for investment is seen as crucial to preserving and building upon the EU's AI assets. The communication also stresses the importance of ensuring that AI solutions are adopted throughout the European economy and across different sectors if the EU is to remain competitive: in 2017, only 25% of large EU enterprises and 10% of EU SMEs were found to be using big data analytics.
The report concludes that the EU has the main ingredients to become a leader in AI. In particular, it has a strong scientific and industrial base to build upon, with leading research labs and universities, recognised leadership in robotics, and innovative start-ups. The EU also has a comprehensive legal framework which protects consumers while promoting innovation and the EU is making progress in the creation of a Digital Single Market.
3. Artificial Intelligence: A European Perspective - 2018
This report was published in 2018 by the Joint Research Centre (JRC), which is the European Commission’s science and knowledge service. The objective of the report is to provide a balanced assessment of the opportunities and challenges presented by AI from a European perspective, and to support the development of European action in the global AI context.
The report highlights that the global competition in AI is largely between the US and China. The US currently leads, but China is on track to become the world leader in AI development by 2030. For the EU, the challenge is not so much to contest global leadership itself, but rather to develop and protect a well-defined and robust ethical framework as a defining characteristic of its AI development. The JRC concludes that Europe is well placed to establish a distinctive form of AI that is ethically robust and protects the rights of individuals, firms and society at large.
The JRC emphasises the need to share data between the key stakeholders in AI development, including between public and private sector bodies and the public. The areas in which collaboration is marked as a priority include:
- Partnerships: increasing efforts to join research initiatives and create partnerships in Europe;
- Interoperable datasets: making high quality and trusted data repositories available to a broad range of users;
- Support: improving accessibility to know-how and testing facilities, such as smart-hospitals and precision-farming solutions;
- Digital innovation hubs: increasing the uptake of AI solutions; and
- Workforce: skilling and upskilling the home-grown workforce, while also taking action to attract and retain non-EU talent.
4. A Definition of AI: Main Capabilities and Disciplines - 08.04.2019
Defining AI is a contentious exercise because of the broad range of AI techniques and capabilities. This poses a challenge for the future, as regulating AI will require an agreed definition of some kind in order to provide certainty over what is being regulated.
Here the High-Level Expert Group on Artificial Intelligence (AI HLEG), an expert advisory group to the European Commission, gives its proposed definition of AI, using the definition proposed by the European Commission (see section 2) as a starting point. Its definition assumes that any AI system has three common components:
- perception;
- reasoning or decision making; and
- actuation.
In order to make decisions, an AI system must have some way of learning (which can be achieved using techniques such as machine learning, neural networks, deep learning and decision trees) so that it can solve problems that cannot be precisely specified, or whose solution method cannot be described by fixed rules.
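To make the distinction between fixed rules and learned behaviour concrete, the following minimal sketch (illustrative only, and not drawn from the AI HLEG's document; it uses Python's scikit-learn library, and the data and feature meanings are invented for this example) shows a decision tree inducing a rule from examples rather than following rules written in advance:

```python
# A rule induced from examples rather than written by hand - a minimal
# illustration of "learning" in the sense described above.
# Requires scikit-learn; all data here is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Training examples: [income, existing_debt] -> loan approved (1) or not (0)
X = [[30, 5], [80, 10], [45, 40], [90, 5], [20, 15], [60, 30]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)  # the learning step: the rule is induced from the data

# The learned model can now decide cases it has never seen before
print(model.predict([[70, 8]]))  # e.g. [1] - approved
```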
5. Ethics Guidelines for Trustworthy AI - 08.04.2019
In this document, the AI HLEG proposes a set of guidelines to ensure that the use of AI is "trustworthy". The guidelines flag that user trust is an essential component in the deployment of a new technology: without trust, the economic and potential societal benefits of AI cannot be realised, as users will not adopt it.
The AI HLEG sets out three pillars of trustworthiness, with explanations on how these can be achieved. The guidelines explain that AI should be:
- lawful;
- ethical; and
- robust, both from a technical and social perspective.
The first pillar is not addressed fully – the AI HLEG merely notes that existing laws, such as consumer safety and data protection laws, should be complied with, and avoids a discussion of what further laws would be required to regulate AI.
The AI HLEG is more expansive on the second and third pillars. It suggests that ethical AI can be achieved by adherence to four key principles:
- respect for human autonomy;
- prevention of harm;
- fairness; and
- explicability.
The guidelines acknowledge that such principles may conflict (for example, how can the prevention of harm be reconciled with human autonomy in autonomous vehicles with manual override?).
On robustness, the guidelines recommend policy measures such as transparency and stakeholder participation, as well as a suite of crucial technical measures (a simple illustration of how two of these might combine in practice follows this list), including:
- having resilience to cybersecurity attack;
- putting backup safety measures in place – such as having to ask for confirmation from a human operator if things go wrong;
- setting accuracy thresholds at a high level; and
- ensuring that all results obtained from AI systems are reproducible and reliable.
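As a loose illustration of how a high accuracy threshold and a human-confirmation backup might combine, here is a minimal sketch (the threshold value, function names and the notion of a single "confidence" score are all assumptions made for this example; the guidelines themselves prescribe nothing at this level of detail):

```python
# Minimal sketch of a confidence-threshold fallback: automated results are
# acted on only when the system is sufficiently confident; otherwise the
# case is escalated to a human operator. All names, values and data are
# invented for illustration.

CONFIDENCE_THRESHOLD = 0.95  # an accuracy threshold "set at a high level"

def classify(case: dict) -> tuple[str, float]:
    """Stand-in for any AI model: returns a (label, confidence) pair."""
    return ("approve", 0.80)  # deliberately below the threshold

def human_review(case: dict, suggested: str) -> str:
    """Backup safety measure: a human operator confirms or overrides."""
    print(f"Escalated to human operator (model suggested: {suggested})")
    return "reject"

def decide(case: dict) -> str:
    label, confidence = classify(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # confident enough to act automatically
    return human_review(case, suggested=label)

print(decide({"id": 1}))  # this example falls back to the human operator
```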
Though the guidelines lack specific detail on how AI developers can comply with these suggestions in practice, they do provide a useful first analysis of how AI can be regulated to ensure that it benefits, and is seen to benefit, society.
6. Building Trust in Human Centric Artificial Intelligence, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions - 08.04.2019
This document is based on the advice the European Commission received from the AI HLEG. The European Commission intends to put many of the AI HLEG's recommendations into action and to begin a "piloting phase involving stakeholders on the widest scale" to assess how to implement ethical guidelines for the development and use of AI.
The document summarises the AI HLEG documents referenced above, and for this reason alone it is a useful resource. It then goes on to set out its plans for reaching a consensus on the key requirements for AI systems, which the European Commission touts as an important first milestone towards establishing guidelines for ethical AI.
The European Commission intends to do this with a two-pronged approach:
- Conducting a pilot to understand how AI developers and users respond to the guidelines. All stakeholders will be invited during this stage to test the proposals and provide feedback on how to improve the guidelines; and
- Running in parallel with the first prong, a project to raise awareness and understanding of the guidelines and their aims. The European Commission plans to do this by organising various outreach activities, giving AI HLEG representatives the opportunity to present the guidelines to relevant stakeholders.
The document further discusses the European Commission's global ambitions. The EU has been in discussion with governments internationally, such as Canada, Singapore and Japan, to build an international consensus on ethical and trustworthy AI.
7. Policy and Investment Recommendations for Trustworthy AI - 26.06.2019
These recommendations build on the AI HLEG's guidelines discussed above. While the guidelines discuss the principles around trustworthiness, these recommendations set out specific steps that governments can take to further the adoption and acceptance of AI: (1) using trustworthy AI to build a positive impact in Europe; and (2) leveraging Europe's enablers for trustworthy AI.
'Europe's enablers' for trustworthy AI include:
- Legally compliant and ethical data management and sharing initiatives;
- AI-specific cybersecurity infrastructures;
- The re-development of education systems to reflect emerging technologies;
- Reskilling and upskilling the current workforce;
- Adapting regulations and laws to ensure adequate protection from adverse impacts; and
- Governance mechanisms for single market AI trustworthiness.
While the recommendations are addressed to the EU and to Member State governments, the content is useful to anyone interested in the direction of travel for AI investment and regulation. For example, the document discourages the use of surveillance (including customer surveillance in a commercial context) and recommends investment in AI solutions that address sustainability challenges. The AI HLEG also recommends introducing a duty of care on developers of consumer-oriented AI systems to ensure that these can be used by all intended users, and to ensure that parts of society are not left behind by the use of AI. The report also recommends increasing investment in private sector AI development for companies of all sizes and in all sectors.
8. Liability for Artificial Intelligence and Other Emerging Digital Technologies - 27.11.2019
This report, published in November 2019, was written by the Expert Group on Liability and New Technologies – New Technologies Formation, an independent expert group set up by the European Commission. The authors assessed the existing liability regimes in light of emerging digital technologies such as AI, IoT and distributed ledger technologies.
They found that the liability regimes in force in Member States do currently ensure at least a basic level of protection for victims whose damage is caused by the operation of such new technologies. There are issues, however, with the current mechanisms for compensation for claimants and the allocation of liability, which may result in inefficient or unfair outcomes for victims. This is due to the specific characteristics of these technologies and their applications, including inherent complexity, modification through updates or self-learning during operation, limited predictability and vulnerability to cybersecurity threats.
To rectify these shortcomings, the authors recommended making a number of adjustments to national and EU level liability regimes, including:
- Strict liability: there should be strict liability for operators of permitted technologies that carry an increased risk of harm to others.
- Duties on users: for operators of technologies that do not pose an increased risk of harm to others there should be a requirement to abide by certain duties – including having to choose the right system for the right task and skills, and to monitor and maintain the chosen system. There would be liability for operators who breach those duties if found to be at fault.
- Accountability: persons using such technologies that have a degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary.
- Manufacturer's liability: There should be liability for manufacturers of products or digital content incorporating emerging digital technology for damage caused by defects in their products, even if the defect was caused by updates made to the product after it had been placed on the market (as long as the manufacturer was still in control of those updates). A development risk defence should not apply for producers.
- Insurance: requiring compulsory liability insurance to give victims better access to compensation and to protect potential tortfeasors against the risk of liability in situations exposing third parties to an increased risk of harm.
- Appropriate standards of proof: ensuring that victims are entitled to so-called "facilitation of proof" in circumstances where an emerging technology has caused harm but where it is difficult to prove liability because of, for example, the particularly complex nature of the technology.
- Mandatory logging features: there should be a duty on manufacturers to equip technology with a means of recording data about its operation where such data is essential for establishing whether a risk posed by a technology has materialised. Furthermore, if such data is absent, or the victim is not given reasonable access to it, this should not prevent the victim from proving a fault that the missing data would otherwise have evidenced.
- Data loss as damage: the destruction of the victim’s data should be regarded as damage, for which compensation is available (subject to certain conditions being satisfied).
- No autonomous legal personality: it should not be necessary to give devices or autonomous systems a legal personality, as the harm these may cause can and should be attributable to existing persons or bodies.
9. Report on the safety and liability implications of AI, the Internet of Things and robotics - 19.02.2020
This report was published by the European Commission on 19 February 2020. It addresses the need to ensure that AI, IoT and robotics all have clear and predictable legal frameworks within which to be developed. These emerging technologies raise new product safety and liability challenges owing to characteristics such as connectivity, autonomy, data dependency, opacity, the complexity of products and systems, software updates, and more complex safety management and value chains.
The report recognises that the nature of AI could make it difficult under the existing liability framework to offer compensation to victims. In particular, under the current rules the allocation of cost when damage occurs may be unfair or inefficient. The report considers various adjustments to the Product Liability Directive and national liability regimes to rectify this, and to address potential uncertainties in the existing framework.
Ideas for such adjustments include:
- Scope of liability: adjusting the scope of the definition of a "product" under the Product Liability Directive to give greater clarity over and to better reflect the complexity of emerging technologies;
- Burden of proof: mitigating the complexity of AI solutions by alleviating or reversing, in favour of victims, the burden of proof required by national liability rules for damage caused by the operation of AI applications;
- Liability for modifications: adjusting the concept of 'putting into circulation' under the Product Liability Directive, so that account is taken of products that can be changed or altered after they have been released for sale, helping to clarify who is liable for alterations made to a product;
- Insurance: coupling strict liability with an obligation to have available insurance, akin to the system put in place by the Motor Insurance Directive in respect of motor vehicles; and
- Causation: adapting the burden of proof in respect of causation and fault so as to avoid a situation whereby a potentially liable party has not logged relevant data for assessing fault or is not willing to share such data with the victim.
The end goal of this exercise is to help create trust in these emerging digital technologies so that users benefit from a reliable liability framework. This would also have the effect of improving confidence for investors.
10. White Paper on Artificial Intelligence - A European Approach to Excellence and Trust - 19.02.2020
When Ursula von der Leyen took office as European Commission President in December 2019, she promised a legislative proposal on AI within her first 100 days. While this deadline was not met, the European Commission has published this white paper to explore the various policy options. A legislative proposal is still due to follow and is currently expected by the end of 2020.
In part this document builds on and repeats the recommendations made by the AI HLEG (as discussed in sections 5 and 7 above), but it then goes on to set out possible legal changes.
First, the European Commission proposes creating a legal definition of AI, which would build on the work already done by the European Commission and the AI HLEG. It then addresses the types of legal changes it may recommend, including:
- Making updates to existing consumer protection laws, to ensure that they stay relevant and continue to apply to AI consumer products and services; and
- Proposing new laws to regulate "high-risk" AI. Anything not classed as high-risk would be subject to laws which already exist.
The paper lacks detail about what "high-risk" AI could mean, other than to say that the European Commission will assess this both by sector (e.g. healthcare) and by purpose. It considers that AI systems which can affect the rights of an individual or company, whether legally or in a similarly significant way, or which pose a risk of injury, death or damage, are the types of applications that would be considered high-risk. Three definite examples are given:
- the use of AI applications in recruitment processes;
- the use of AI for biometric identification; and
- the use of AI for surveillance.
In conclusion, while the EU lags behind the US and China in AI technologies, it is intent on being at the forefront of AI in terms of scientific research, commercial development and regulation that protects users.
The documents summarised in this article show how the EU's approach to AI – and the regulation of AI – is gradually evolving, and how, in broad terms, it has settled on a three-pronged approach of investment, regulation and public education.
Anyone involved in developing or investing in AI technologies should keep a close eye on further developments, so as to get a head start on the new legal and economic landscape for AI as it is gradually put in place by the EU and its Member States.
Jonathan Edwards and Clara Clark Nevola are associates at Bird & Bird