While true AWS are not yet in development, there is a pressing need to regulate them, because autonomous capabilities are already being developed for technologies such as Unmanned Aerial Vehicles (UAVs), and some of these functions, such as target selection, are of direct military importance. When deployed, these autonomous drones will need to be regulated to protect against misuse and abuse, whether intentional or not. According to “Off The Leash”, a report by the group Drone Wars UK, creating strong regulations and policy direction now, at this early stage, is critical.
Drones as AWS
Fully autonomous weapons do not currently exist, but there are already military systems around the world with automated components or functions. These include armed military drones with automated take-off, landing, and navigation capabilities. A significant feature of drones in particular is that improvements are often highly incremental, arriving as regular software updates, and additional features can be “bolted on” – which makes drones ideal candidates for morphing into AWS. At the moment there is always a human in control of a drone, but with improvements arriving so regularly as the drone industry continues to expand globally, that certainty erodes with each passing day.
Certain technologies, such as Machine Learning and Cloud Computing – the latter allowing an individual robot to learn from the experiences of all its fellow robots in a group, leading to a rapid increase in overall competence – are expected to be the key enablers of autonomous drones. The leap to lethal autonomous drones is not so much a technological one as an ethical and moral one. The military appetite is also plain: compared with a human soldier, a UAV does not suffer from fatigue, has faster reaction times, and so on. Advocates of AWS point to these advantages as likely to reduce harm during warfare, because operations will be carried out more efficiently.
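As a rough illustration of that shared-learning idea, the sketch below pools locally learned model updates across a fleet, in the style of federated averaging. Every name and number here is hypothetical – it is a toy under assumed simplifications, not a description of any real drone system.

```python
# Minimal sketch of fleet learning: each drone refines a local copy of a
# shared model, and a cloud service averages the updates so that every
# drone benefits from the whole fleet's experience. All names here are
# hypothetical; this illustrates the idea, not any real system's API.
import numpy as np

def local_update(weights: np.ndarray, experience: np.ndarray,
                 lr: float = 0.01) -> np.ndarray:
    """One drone nudges its model using its own observations
    (a stand-in for a real training step)."""
    gradient = experience - weights          # toy 'learning signal'
    return weights + lr * gradient

def fleet_average(all_weights: list) -> np.ndarray:
    """Cloud aggregation: the new shared model is the mean of every
    drone's locally updated model (federated averaging)."""
    return np.mean(all_weights, axis=0)

shared_model = np.zeros(4)                   # shared starting point
for training_round in range(5):
    # Each of 10 drones learns from its own (simulated) experience...
    local_models = [local_update(shared_model, np.random.randn(4))
                    for _ in range(10)]
    # ...and the fleet immediately pools what was learned.
    shared_model = fleet_average(local_models)
```

The design point is simply that one drone’s experience is never lost to the rest of the group, which is why competence can rise fleet-wide with each update cycle.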
Adding new capabilities incrementally to drones that can already carry a lethal payload means that we could end up with new AWS without realising it, or without having the rules in place to regulate them. A clear definition marking the point at which a drone becomes an AWS is therefore important. The International Committee of the Red Cross (ICRC) defines AWS as: “Any weapon system with autonomy in its critical functions. That is, a weapon that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralise, damage or destroy) targets without human intervention.” Under this definition, a drone with autonomous capabilities in those critical areas would therefore become an AWS.
Automated v. autonomous
The ICRC definition highlights autonomy in a weapon’s critical functions – those relating to weapon use – in contrast to its non-critical functions. Most people can agree that many functions are acceptable, even desirable, to automate – take-off, landing, navigation – where automation simply improves the overall performance of the system while making it easier for the human operator to use. By this token, it is more useful to look at which functions are gaining autonomy than to measure the autonomy of the device as a whole.
Automating non-critical functions produces a device that is considered automated – still under human control but able to perform many of its functions automatically. Automating critical functions instead pushes a drone towards being autonomous, which raises ethical, moral, and legal questions (a sketch of this classification test follows the list below). These include:
- What is the role of the weapon with autonomy?
- How much autonomy does the weapon have in this role?
- What are the relevant legal issues?
- Does the weapon have the capability to fulfil legal requirements?
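To make the automated/autonomous distinction concrete, here is a minimal sketch of how an ICRC-style test might be operationalised: a system is classed by which of its functions are autonomous, not by an overall autonomy score. The function names and categories are hypothetical, chosen purely for illustration.

```python
# A rough sketch of the ICRC-style test: a system counts as an AWS only
# if any *critical* function (selecting or attacking targets) operates
# without human intervention. All names here are hypothetical.
CRITICAL_FUNCTIONS = {"select_target", "attack_target"}
NON_CRITICAL_FUNCTIONS = {"take_off", "land", "navigate"}

def classify(autonomous_functions: set) -> str:
    """Classify a drone by *which* functions are autonomous, rather
    than by some overall measure of autonomy."""
    if autonomous_functions & CRITICAL_FUNCTIONS:
        return "autonomous weapon system (AWS)"
    if autonomous_functions & NON_CRITICAL_FUNCTIONS:
        return "automated system (human retains weapon control)"
    return "remotely operated system"

print(classify({"take_off", "land", "navigate"}))  # automated
print(classify({"navigate", "select_target"}))     # AWS under this test
```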
The UK Ministry of Defence (MoD) argues that distinguishing between autonomous and automated devices is important, and has published the following definitions to make sure that any of its systems can fulfil legal requirements.
Automated system
“In the unmanned aircraft context, an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome. Knowing the set of rules under which it is operating means that its output is predictable.”
Remote and automated system
“A system comprising the platform, control, and sensor equipment, the supporting network, information processing system, and associated personnel where the platform may be operated remotely and/or have automated functionality.”
Autonomous system
“An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.”
The MoD definition of an autonomous system is much narrower than the ICRC’s, possibly so as to downplay fears over AWS development. By defining an autonomous system as operating almost at the level of a human, the MoD can claim that no such systems are in development. With these definitions in clear disagreement, regulating this emerging industry is an uphill struggle, especially as there are currently no agreed international legal norms.
Risks
The short-term military applications of drones are likely to be low-risk and well supervised, such as logistics and supply-chain tasks. In the longer term, autonomy will become more sophisticated, the technology more weaponised, and the degree of supervision will fall – leading to higher levels of risk.
In the fields of artificial intelligence and robotics, unpredictability has proven to be an inherent feature. With complex self-learning systems, we do not fully understand what makes them work, and their actions remain difficult to explain: the decisions they take have evolved from their learning processes rather than being specifically programmed in by a human. Given the high-stakes environments AWS will be deployed in, they cannot afford to be unpredictable. Furthermore, minute programming errors can cascade into huge problems in a computer system, leading to total failure or to actions that run entirely counter to the intended programming.
The critical function of target selection is an obvious area where unpredictability would cause chaos on the battlefield. Advocates of AWS claim that advances in this area may make strikes more surgical and reduce overall harm, but a tiny programming error could lead to a system firing indiscriminately, or failing to differentiate between civilians and valid military targets. Unconscious bias on the part of the human engineers could also affect this critical function and lead to the wrong outcome in a conflict.
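As a purely hypothetical illustration of how small such an error can be, consider a single flipped comparison operator in a target-confidence check: the intended rule engages only near-certain targets, while the buggy one engages exactly the uncertain ones.

```python
# Hypothetical illustration (not from any real system) of how a minute
# programming error can invert behaviour: one flipped comparison turns
# "engage only near-certain targets" into "engage only uncertain ones".
def should_engage(confidence: float, threshold: float = 0.99) -> bool:
    return confidence >= threshold           # intended rule

def should_engage_buggy(confidence: float, threshold: float = 0.99) -> bool:
    return confidence <= threshold           # one-character typo

for c in (0.10, 0.50, 0.995):
    print(f"confidence {c:.3f}: intended={should_engage(c)}, "
          f"buggy={should_engage_buggy(c)}")
# confidence 0.100: intended=False, buggy=True
# confidence 0.500: intended=False, buggy=True
# confidence 0.995: intended=True, buggy=False
```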
How a drone could determine a proportionate response in a high-pressure battlefield situation is unknown, and its judgement is unlikely to be acceptable to a human overseer: such decisions draw on years of human experience and are difficult to program into a machine. Deploying an unpredictable weapon system could escalate a conflict because the wrong course of action was taken – one a human would not have taken. The speed at which AWS will make these decisions is also a factor, as there may be no human behind the controls to stop them in time.
There are also more “mundane” risks: vulnerability to outside cyber attack, normal accidents in the line of duty, and intentional misuse or abuse. The development of autonomy is not the whole issue – it is how it is then used by humans.
UK's position
So far, the UK and US are the only countries to have any detailed policies on AWS. Both are narrow in scope, restricting the deployment of AWS without appropriate human control, but not their research and development. The UK’s position is set out in a Joint Doctrine Publication, “Unmanned Aircraft Systems”, and a Joint Concept Note, “Human Machine Teaming”, published by the MoD in September 2017 and May 2018 respectively. The Joint Doctrine Publication makes the case that the UK operates its military drones in compliance with international law.
It also states that the UK opposes the development of AWS and has no intention of developing such systems. However, as discussed previously, the UK’s official definition of AWS is so narrow that the technologies currently being developed cannot possibly meet it. Hayley Evans, writing on the Lawfare blog, put it this way: “The UK defines autonomous weapons systems and LAWS in such a futuristic way that it is difficult to discern the UK position on other, less sophisticated LAWS that are actually on the cusp of development.”
AWS are likely to emerge first in the area of drones, given recent advances in the sector and the growing industry worldwide. Development of the technology required to evolve drones into AWS is already underway, including in the UK. According to Drone Wars UK, if the UK considers itself a responsible member of the international community, it needs to stand firm in its stated conviction that it opposes the development of AWS. It can do this by supporting the development of a legal instrument to prevent their development and deployment, and by playing a full role in the discussions under the UN Convention on Certain Conventional Weapons (CCW). It should also bring its definition of AWS into line with those of other countries, helping to bolster efforts to establish international treaties and legal norms at an early stage and prevent the deployment of AWS altogether.