By David Palmer, Senior Associate, Fieldfisher
The Commission invited industry, research institutes and public authorities to test the detailed assessment list drafted by the High-Level Expert Group, which complements the guidelines. The latest plans are a deliverable under the AI strategy of April 2018, which aims to increase public and private investment to at least €20 billion annually over the next decade, make more data available, foster talent and ensure trust. In this article, we discuss the aims of the AI Guidelines and some of the legal issues for employers and customer-facing businesses that use AI.
Code to joy
The underlying theme of the AI Guidelines is that AI will be a force for good if it is trustworthy. According to the AI Guidelines, trustworthy AI should be lawful, ethical and robust. The AI Guidelines propose seven key requirements for assessing trustworthiness:
- Humans should retain agency over, and oversight of, the actions of AI.
- AI should be technically robust and safe.
- Private data should be used legitimately and governed responsibly.
- Decisions made by AI should be transparent and traceable.
- Bias should be eliminated from AI so as to avoid exacerbation of prejudice and discrimination.
- AI should be sustainable and environmentally friendly, and operate for the benefit of all living beings.
- Mechanisms should be put in place to ensure accountability for AI systems and their outcomes, and to make AI systems auditable.
The heart of the machine
Why does the EU think AI might not be trustworthy? AI is learning from data created and compiled as a result of the cumulative actions of humans. Our many biases (conscious and unconscious) are hard-baked into the data that forms AI's curriculum. Add to that the risk that AI might be used intentionally by the powerful among us in ways that aren't so good for the vulnerable among us, and the EU definitely has a point.
To put it another way: AI has all the makings of an A-star student. But with a biased textbook and a potentially exploitative teacher, it might end up more Paranoid Android than OK Computer.
The AI in the sky
In the UK, it is already the case that any AI used by a business which places employees or customers with a certain protected characteristic (such as sex, disability, age or race) at a disadvantage could, in certain circumstances, breach the Equality Act 2010. Depending on how that person's data has been used by the AI, there could be GDPR implications too. But perhaps the biggest risk of using AI is reputational. The media loves a story about AI being discriminatory. The female doctor who was unable to use her gym swipe card to access the female changing rooms, because the software assumed that only men could be doctors, is one such example.
The AI-mardillo – hard on the outside, soft on the inside
The ethics of AI remains an evolving space. There are already "hard" legal risks surrounding AI, and the focus is now shifting towards the "soft" ethical issues at its core. Both are likely to grow over time.
The High-Level Expert Group is running a pilot to gather practical feedback on how its seven-point assessment list can be improved. The assessment list will be reviewed in early 2020 and the EU Commission may then propose next steps.
Regardless of Brexit, the EU Commission is now the thought leader in this space. If it proposes that the EU legislates in this area, expect the UK Government to be strongly minded to follow suit.