22 Feb 2016

Health and safety in the age of killer robots: The criminal liability of robots in a workplace environment

Can robots be accused of criminality? Brian O'Neill and Rob Dacre of 2 Hare Court examine issues which are starting to arise in the workplace.

By Rob Dacre and Brian O'Neill



It may seem like the stuff of science fiction, but the criminal liability of robots in the workplace may become legal fact in the not too distant future. Boston Consulting Group has predicted that by 2025 up to a quarter of jobs will be replaced by either ‘smart software’ or robots; researchers from Oxford University suggest that 35% of jobs in the UK are at risk of ‘automation’ by 2035. Accidents in the workplace, and the legal fall-out that results, will increasingly involve questions about how liability is shared out if and when those accidents involve robots.

In other jurisdictions such questions are already being posed and not in a hypothetical way. In February this year a worker in a Volkswagen car factory in Germany was killed when a robot pinned him against a wall. The robot was a mechanical arm that moves car parts into place. It was capable of functioning entirely without a human operator. A police investigation is on-going, and state prosecutors have yet to confirm whether (in the words of a press officer) ‘anybody was at fault’.

The press officer’s statement may well encapsulate the key issue for courts dealing with workplace accidents where robots have been involved. If a robot is capable of functioning entirely without human operators, does this insulate companies from liability for robots that cause accidents? And where is liability likely to go if companies can rely on a new defence based on a robot’s ability to function on its own? This article looks at two areas of English law where these questions about robot liability may be posed, and resolved, in the near future: corporate manslaughter and health and safety regulations surrounding the use of machinery.

In England and Wales, fatal workplace accidents are dealt with in the criminal courts under the Corporate Manslaughter and Corporate Homicide Act 2007. Section 1 reads as follows:

‘(1) An organisation to which this section applies is guilty of an offence if the way in which its activities are managed or organised –

(a) causes a person’s death, and

(b) amounts to a gross breach of a relevant duty of care owed by the organisation to the deceased.’

Section 1(3) restricts the liability of corporations by prescribing that an organisation is guilty only if the way in which its activities are ’managed or organised by its senior management is a substantial element in the breach referred to in subsection (1)’. In order, therefore, for a company to be criminally liable for fatal accidents, the courts must find first that an accident was caused by the way in which the company has been managed and organised; second that the accident resulted from a ‘gross breach’ of a relevant duty of care; and third that organisation or management by senior management was a ‘substantial element’ in the breach arising.

Had the German worker been killed in an English factory, problems may have arisen in relation to all three limbs of s.1. First, proving causation where a robot is capable of functioning entirely without human operators will be difficult. If, in essence, a robot has ‘decided’ to behave in a manner that causes an accident, it may not be possible to establish that senior management practices have been a cause of the breach, as opposed to the ‘decision’ made. Second, the courts may be reluctant to find a duty of care to prevent autonomous machines, which operate entirely independently of their human controllers, from behaving in a way that is (by its nature) unexpected. Third, even if the prosecution can establish limbs one and two, the ‘substantial element’ caveat will cause further problems. If it is difficult to establish causation at all where a machine is autonomous, it will be even more difficult to establish that mismanagement at a senior level was a ‘substantial element’ in causing the breach.

Fears about a robot lacuna are, however, probably misplaced. All of the above, of course, assumes that the robot in question is capable of entirely autonomous behaviour, or capable of making decisions, moving and performing its functions without any input, at any stage, from human controllers. It is likely that the robotic arm in the German factory, or its equivalent in an English factory, does not quite reach this level of sophistication: it was capable of independent operation, but in a way that was uniform and predictable. Plainly, where the use of the robotic arm gives rise to foreseeable risks of injury to employees, the courts will be quicker to accept that fatal accidents have been caused by mismanagement, and that corporations owed a relevant duty of care.

Even if a company can establish that a robot acted entirely of its own volition, and in an unpredictable way, that does not mean that there will be no causative link between injury and a breach of health and safety duties. Section 2 of the Health and Safety at Work Act 1974 places a duty on employers not just to provide safe equipment (or non-lethal robots), but also to provide appropriate training and supervision with that machinery. Employers must ensure ‘the provision of such information, instruction, training and supervision as is necessary to ensure, so far as is reasonably practicable, the health and safety at work of his employees’.

It will not be enough to demonstrate that the robot was acting on its own if there are other identifiable breaches (such as a failure properly to train employees) that were a cause of the accident. Some of the duties prescribed by European regulations are set out below, but it is not difficult to imagine cases where employers have breached any number of duties in relation to employee safety in interacting with robots, even if the robot is capable of making decisions independently. Moreover, as the number of accidents involving autonomous machines increases (as it no doubt will), so too will the foreseeability of risk, and in time, the number of regulations governing health and safety when using robots.

Health and safety regulations currently in force would similarly prevent companies from avoiding liability where an accident has been caused by a robot’s ‘decision’. The Provision and Use of Work Equipment Regulations 1998 (“the PUWER regulations”) provide best practice regulations for the use of machinery in workplace environments. The Health and Safety Executive also publishes an Approved Code of Practice on complying with the regulations which, whilst not given statutory authority, can be persuasive to courts in establishing whether duties have been breached. Under these regulations, it will be very difficult for companies to hide behind a robot’s ‘decision’ as the sole substantial cause of an accident. The PUWER regulations prescribe best practice in preventing employees having access to dangerous parts of machines, training in their use and in risk management, and proper maintenance. It seems highly unlikely that an employee will be caused injury by a robot without some failure to abide by regulations concerning their safe use.

Companies which use robots cannot hope to rely, therefore, on an infallible new robot defence in the very near future. It is still possible to imagine a scenario in which an entirely autonomous robot makes an entirely unforeseeable decision which causes an accident; and where no breaches of health and safety regulations can be identified. But that lacuna is very small indeed: at least for now, that legal and philosophical problem remains in the realms of science fiction.