11 May 2020

A little learning can be a dangerous thing: regulation of AI and machine learning across the UK and Europe

The European Commission and the UK Information Commissioner’s Office unveiled white papers on AI regulation on the same day. Jo Joyce and Jean-David Behlow ask whether the EU and UK can become world leaders in AI regulation


AI and machine learning are firmly at the top of the regulatory agenda right now.

In a case of bureaucratic minds thinking alike, last month the European Commission (EC) published its much-awaited White Paper on Artificial Intelligence (AI) alongside its wider communication, A European Strategy for Data, and on the same day the UK Information Commissioner’s Office (ICO) produced a guidance document on an AI auditing framework.

Both the EC and ICO are putting their analysis out for consultation and calling for legal and technical views. Whether either has the clout to impose change at the global level in the face of competing economic superpowers remains to be seen.

Neither the UK nor the EU has been ahead of the curve when it comes to efforts to regulate AI. The Canadian federal government launched its five-year Pan-Canadian Artificial Intelligence Strategy back in 2017 and has since been followed by dozens of countries, from Australia to Uruguay.

A number of international strategies are also in place, including UN initiatives and a series of shared commitments agreed by the G7.

The UK and Europe are, however, ahead of the US. Although there are some moves to introduce regulation of AI at a federal level in the US, the White House has indicated an unwillingness to significantly increase the regulatory burden. Given the pace of technological development, it may not be easy for any government or international body to impose an ethical framework on this already thriving sector.

Last week, in a move surprising to some, the Vatican became the latest institution to announce that it will issue AI ethical guidelines. In an early statement, the Catholic Church called for AI to be ‘humanity centric’.

Privacy by design and default

At an impressive 105 pages, the ICO’s draft audit framework guidance is about as comprehensive as possible, without being indigestible. The guidance is designed to provide a ‘solid methodology’ to help developers and risk managers assess and manage the risks that AI can pose to rights and freedoms, in the context of privacy.

The GDPR (supplemented in the UK by the Data Protection Act 2018 and retained in UK law after Brexit) emphasises the concept of privacy by design and default: building privacy into the core of new technologies, services and products, rather than treating it as an afterthought.

Privacy by design and default is both crucial and challenging in the context of AI and machine learning, particularly when the desired outcome is a machine which makes decisions about the use and creation of personal data.

Accountability

The guidance reiterates that businesses are responsible for ensuring the compliance of their own systems. This includes assessing and mitigating risks and documenting and demonstrating AI decisions made about personal data.
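By way of illustration only, here is a minimal sketch of the kind of decision record that the documentation duty points towards. The DecisionRecord structure and its field names are hypothetical; the ICO guidance does not prescribe this (or any particular) format:

```python
# Illustrative sketch only: a hypothetical record of an automated
# decision about personal data, retained for accountability purposes.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_ref: str      # pseudonymised reference to the data subject
    model_version: str    # which model produced the decision
    inputs_summary: dict  # features relied on (no raw personal data)
    decision: str         # the outcome, eg "loan_refused"
    lawful_basis: str     # basis relied on, eg "legitimate_interests"
    human_reviewed: bool  # whether a human checked the output
    timestamp: str        # when the decision was made (UTC)

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append one decision record as a JSON line for later audit."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    subject_ref="subj-4821",
    model_version="credit-risk-2.3",
    inputs_summary={"income_band": "C", "account_age_years": 4},
    decision="loan_refused",
    lawful_basis="legitimate_interests",
    human_reviewed=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```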

It also places great importance on the involvement of senior management in the risk management process of AI and data-driven decisions. Interestingly, the guidance states that this cannot be delegated to data scientists or engineering teams.

The draft follows the existing ICO guidance on data protection impact assessments (DPIAs), and confirms that businesses which use AI systems that process personal data are legally required to carry out a DPIA to assess and mitigate potential risks for individuals before beginning the processing.

While businesses acting as processors are not legally required to conduct a DPIA, some already invest time in assisting controllers or creating their own AI DPIAs.

This becomes more pressing if they intend to re-use the data for their own purposes as controllers.

Transparency

The complexity of the AI ecosystem makes complying with the GDPR’s lawfulness and transparency requirements challenging. Identifying an appropriate lawful basis is a recurring issue for those processing personal data in an AI context.

The draft guidance provides useful examples of lawful bases that can be used in certain scenarios.

Consent may be an appropriate lawful basis for the use of an individual’s data during deployment of an AI system (eg for purposes such as personalising the service or making a prediction or recommendation), provided that consent is validly collected.

The guidance also clarifies that businesses are unlikely to be able to rely on the ‘performance of the contract’ ground for processing personal data for purposes such as ‘service improvement’ of an AI system.

The examples given are useful, but controllers are still required to make a thorough assessment when choosing which lawful basis to apply before they begin processing personal data.

The ICO notes that a lawful basis can be changed provided there is a good reason for doing so and data subjects are notified, although this will be difficult to implement in practice, especially when moving from legitimate interests to consent.

Controllers and processors in the AI lifecycle

In the ICO’s view, an organisation will act as a controller if it takes the ‘overarching decisions about what data and models it wants to use, the key model parameters, and the processes for evaluating, testing and updating those models’.

The draft guidance provides a series of scenarios setting out when businesses are likely to act as controllers (for example, if a developer is using personal data to train its models).

An interesting clarification is that an organisation that takes certain decisions about data and models may still act as a processor, rather than a controller, provided its decisions are not ‘overarching’. This means that making choices about measures to optimise learning algorithms and models (to minimise their consumption of computing resources) will not necessarily indicate the decision maker is a controller, unless it also has a ‘sufficient influence over the essential elements and purposes of the processing involved in the prediction’.

The guidance also clarifies that businesses which process personal data on behalf of their customers, and then re-use that data to improve their own AI models and algorithms, will be acting as data controllers.

Crucially, many organisations using AI or machine learning will have to rethink their status under the GDPR as controllers or processors and expressly reflect their roles, responsibilities and associated liabilities in their data processing agreements.

Trade-off analysis

The ICO stresses the importance of carrying out a ‘trade-off’ analysis between privacy and other factors as part of a company’s initial due diligence.

Failure to do this can lead to inadequate or inappropriate decisions which negatively impact data subjects.

The guidance makes an interesting distinction between data accuracy in a privacy context, and data accuracy in an AI context. While statistical accuracy will generally lead to more accurate personal data being generated, the guidance clarifies that AI applications do not need to be 100% statistically accurate to be acceptable in a privacy context.

This will give some comfort to developers that there is no automatic assumption that AI applications will be a high compliance risk from a data accuracy point of view.
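To make the distinction concrete, the short sketch below (using hypothetical predictions and labels) shows what ‘statistical accuracy’ means here: the proportion of correct outputs on held-out test data, a figure that can legitimately sit below 100% provided the trade-off has been assessed and documented:

```python
# Illustrative sketch only: statistical accuracy of a classifier,
# computed on hypothetical held-out predictions and true labels.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
actuals     = [1, 0, 0, 1, 0, 1, 0, 1, 1, 1]

correct = sum(p == a for p, a in zip(predictions, actuals))
accuracy = correct / len(actuals)
print(f"Statistical accuracy: {accuracy:.0%}")  # 80%, not 100%

# Under the draft guidance, 80% is not an automatic compliance
# failure; whether it is acceptable is a documented trade-off decision.
```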

Although the draft guidance deals strictly with data protection compliance for AI rather than with ethical concerns, it also addresses the issue of discrimination and bias.

Bias and discrimination

The ICO provides a list of controls and measures to implement, recommends clear risk management policies, and gives examples of good practice.

Human oversight of the quality and sources of data used to train AI is considered essential to avoiding the potential discriminatory effects of a technology which is often developed using data drawn predominantly from white male test subjects.

This is a concern that has recently been raised in relation to facial recognition technology which, due to the data on which it has been trained, is far more likely to generate false positive results when identifying women or those with non-white skin tones and features.
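As a rough illustration of the kind of check such human oversight might involve, the sketch below compares false positive rates across demographic groups using entirely hypothetical face-matching results; a marked disparity between groups is exactly the signal the guidance is concerned with:

```python
# Illustrative sketch only: false positive rates per demographic group,
# using hypothetical (group, predicted_match, actual_match) results.
from collections import defaultdict

results = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True,  True),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
]

false_pos = defaultdict(int)  # false positives per group
negatives = defaultdict(int)  # actual non-matches per group

for group, predicted, actual in results:
    if not actual:            # only true non-matches can yield false positives
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"{group}: false positive rate {rate:.0%}")
# group_a: 33%, group_b: 67% - a disparity that warrants investigation
```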

The ICO’s focus on discrimination through biased sampling and data training is an example of its increasingly expansive approach to its remit.

We’ve already seen the ICO take this approach in the final version of the Age Appropriate Design Code, soon to be laid before Parliament, which has ventured into child welfare concerns beyond pure privacy issues.

The EC’s data strategy and AI white paper: an ambitious plan for a competitive world

The EC’s approach in its new data strategy communication and AI white paper is to keep detail to a minimum while still adopting a bold and wide-reaching outlook.

The commission is showing a great deal of ambition in its A European Strategy for Data. Balancing what the EC itself identifies as Europe’s ‘strict’ privacy regime with a desire to use data to drive the European Green Deal, better health outcomes, and wider access to data for SMEs, might be considered challenging enough.

In addition, the EC is also setting out its stall in competition with other influential jurisdictions and looking to build the European data model as the global default approach.

In the AI white paper, the commission positions itself as the balanced and individual rights-focused alternative to the regulation-lite approach of the US, and the contrasting model of strict government surveillance and control in China.

The commission does not expressly make the point that the power of the US ‘Big Tech’ firms means that much of the world’s data is controlled out of California, but the subtext of the white paper shows this as a clear concern.

Although there might be fewer restrictions on data processing outside the EU, the level of investment and consistency of approach across EU borders that the commission envisages can be viewed as an attempt to ensure technological progress is incentivised through the quantity and quality of data available.

While this outward-looking and rigorous approach is very much in line with the EU’s focus on consumer protections in the Digital Single Market project and the creation of a modern privacy and data security regime, it is also an acknowledgement of the threat the EU faces from other, less regulated or less rights-based regimes, which take a different ethical approach to the constraints on data management that the EU model imposes.

Given the number of jurisdictions that have used the GDPR as the basis for new privacy legislation, it is possible that the commission’s attempt to make the European approach to data ubiquitous will pay off. However, the EU’s desire to be the ‘global hub for data’ is going to be difficult to reconcile with its intention to enhance regulation in the areas of privacy, AI ethics and cybersecurity.

The commission identifies possible enhancements to the GDPR around data portability and the AI white paper suggests that the use of biometrics (such as facial recognition technology), other than for necessary law enforcement, is likely to become much harder to justify.

The vision of data-sharing through nine common data spaces (focusing on key areas such as the Green Deal, health, and energy) is bold, but how this will work in practice remains unclear.

The consultation launched by the EC on the strategy is likely to receive a lot of interest in relation to funding, access to EU data spaces, and any reward for participating.

Neither the AI white paper nor the data strategy communication makes any reference to the UK. This is unsurprising as the UK was an EU member state throughout much of the drafting process and has stated a broad intention to remain aligned with EU data law after the transition period.

However, alignment is far from guaranteed and the approach taken by the UK, and any cooperation between it and the EU in this area, could be incredibly important to the success of the commission’s plans.

The ICO’s role in developing policy and guidance used across the EU has been crucial to the success of the GDPR.

If there is no UK involvement in the proposed data and AI strategy, the commission may find it harder to secure engagement in the approach to data that it seeks to foster.

With so many competing approaches to the governance of AI, there has never been more uncertainty for AI developers around the world, but nor has there ever been a better opportunity to contribute to the political and ethical debate.

Jo Joyce and Jean-David Behlow are senior associates in Taylor Wessing’s commercial technology and data group