13 Feb 2017

Artificial Intelligence – a Major Global Risk

In the World Economic Forum’s Global Risks Report 2017, AI takes a lead role.

Every step forward in artificial intelligence (AI) challenges assumptions about what machines can do. Myriad opportunities for economic benefit have created a steady flow of investment into AI research and development, but with the opportunities come risks to decision-making, security and governance. Increasingly intelligent systems are supplanting both blue- and white-collar employees, exposing the fault lines in our economic and social systems and requiring policy-makers to look for measures that will build resilience to the impact of automation.

Leading entrepreneurs and scientists are also concerned about how to engineer intelligent systems as these systems begin implicitly taking on social obligations and responsibilities, and several of them penned an Open Letter on Research Priorities for Robust and Beneficial Artificial Intelligence in early 2015. Whether or not we are comfortable with AI may already be moot: the more pertinent questions are whether we can, and ought to, build trust in systems that make decisions beyond human oversight and that may have irreversible consequences.

Growing Investment, Benefits and Potential Risk

By providing new information and improving decision-making through data-driven strategies, AI could potentially help to solve some of the complex global challenges of the 21st century, from climate change and resource utilization to the impact of population growth and healthcare issues. Start-ups specializing in AI applications received US$2.4 billion in venture capital funding globally in 2015 and more than US$1.5 billion in the first half of 2016. Government programmes and existing technology companies add further billions. Leading players are not just hiring from universities; they are hiring the universities: Amazon, Google and Microsoft have moved to funding professorships and directly acquiring university researchers in the search for competitive advantage.

Machine learning techniques are now revealing valuable patterns in large data sets and adding value to enterprises by tackling problems at a scale beyond human capability. For example, Stanford’s computational pathologist (C-Path) has highlighted unnoticed indicators for breast cancer by analysing thousands of cellular features on hundreds of tumour images, while DeepMind increased the power usage efficiency of Alphabet Inc.’s data centres by 15%. AI applications can reduce costs and improve diagnostics with staggering speed and surprising creativity.
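
To make the pattern-finding idea concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn on synthetic data, not C-Path's actual method): a model trained on many examples can rank which of dozens of measured features actually predict an outcome, surfacing indicators it was never told to look for.

```python
# Illustrative sketch only: synthetic data standing in for the kind of
# tumour-feature analysis described above (C-Path itself is not public code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 1,000 synthetic "tumour images", each summarized by 50 cellular features.
X = rng.normal(size=(1000, 50))
# Make two arbitrary features genuinely predictive of the outcome.
risk = 1.2 * X[:, 7] - 0.8 * X[:, 23] + rng.normal(scale=0.5, size=1000)
y = (risk > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank features by importance: the model surfaces the informative ones
# without being told where to look, which is the point of the example.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("Most predictive features:", top)
```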

The generic term AI covers a wide range of capabilities and potential capabilities. Some serious thinkers fear that AI could one day pose an existential threat: a “superintelligence” might pursue goals that prove not to be aligned with the continued existence of humankind. Such fears relate to “strong” AI or “artificial general intelligence” (AGI), which would be the equivalent of human-level awareness, but which does not yet exist. Current AI applications are forms of “weak” or “narrow” AI or “artificial specialized intelligence” (ASI); they are directed at solving specific problems or taking actions within a limited set of parameters, some of which may be unknown and must be discovered and learned.

Tasks such as trading stocks, writing sports summaries, flying military planes and keeping a car within its lane on the highway are now all within the domain of ASI. As ASI applications expand, so do the risks of these applications operating in unforeseeable ways or outside the control of humans. The 2010 and 2015 stock market “flash crashes” illustrate how ASI applications can have unanticipated real-world impacts, while AlphaGo shows how ASI can surprise human experts.

AI has great potential to augment human decision-making by countering cognitive biases and making rapid sense of extremely large data sets: at least one venture capital firm has already appointed an AI application to help determine its financial decisions. Gradually removing human oversight can increase efficiency and is necessary for some applications, such as automated vehicles. However, there are dangers in coming to depend entirely on the decisions of AI systems when we do not fully understand how the systems are making those decisions.

Risks to Decision-Making, Security and Safety

In any complex and chaotic system, including AI systems, potential dangers include mismanagement, design vulnerabilities, accidents and unforeseen occurrences. These pose serious challenges to ensuring the security and safety of individuals, governments and enterprises. It may be tolerable for a bug to cause an AI mobile phone application to freeze or misunderstand a request, but when a similar error occurs in an AI weapons system or an autonomous navigation system, the results could be lethal.

Machine-learning algorithms can also develop their own biases, depending on the data they analyse. For example, an experimental Twitter account run by an AI application was taken down for making socially unacceptable remarks, and search engine algorithms have come under fire for producing undesirable race-related results. Decision-making that is fully or partially dependent on AI systems will need management protocols to avoid or remedy such outcomes.
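
A minimal sketch of the mechanism, on invented data: a model fitted to historically skewed decisions learns to penalize group membership itself, so the bias survives in its future predictions. The “group” and “skill” variables here are hypothetical stand-ins, not drawn from any real system.

```python
# Minimal sketch of how a model inherits bias from its training data.
# All data are synthetic; the "group" attribute is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, size=n)   # two demographic groups, 0 and 1
skill = rng.normal(size=n)           # the genuinely relevant signal

# Historical labels are skewed: group 1 was approved less often,
# even at the same skill level.
label = (skill - 0.8 * group > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The learned weight on "group" is negative: the model has absorbed the
# historical skew and will keep applying it to new cases.
print("coefficients [skill, group]:", model.coef_[0])
```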

AI systems in the Cloud are of particular concern because of issues of control and governance. Some experts propose that robust AI systems should run in a “sandbox” – an experimental space disconnected from external systems – but some cognitive services already depend on their connection to the internet. The AI legal assistant ROSS, for example, must have access to electronically available databases. IBM’s Watson accesses electronic journals, delivers its services, and even teaches a university course via the internet. The data extraction program TextRunner is successful precisely because it is left to explore the web and draw its own conclusions unsupervised.

On the other hand, AI can help solve cybersecurity challenges. Currently, AI applications are used to spot cyberattacks and potential fraud in internet transactions. Whether AI applications are better at learning to attack or defend will determine whether online systems become more secure or more prone to successful cyberattacks. AI systems are already analysing vast amounts of data from phone applications and wearables; as sensors find their way into our appliances and clothing, maintaining security over our data and our accounts will become an even more crucial priority. In the physical world, AI systems are also being used in surveillance and monitoring – analysing video and sound to spot crime, help with anti-terrorism and report unusual activity. How much they will come to reduce overall privacy is a real concern.
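
As an illustration of the defensive use, the sketch below applies an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) to synthetic transaction data and flags records that look unlike the bulk. Real fraud-detection systems are far more elaborate; the features and numbers here are invented.

```python
# Minimal sketch of AI-based fraud spotting: flag transactions that look
# unlike the bulk of the data. Amounts and hours here are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Normal transactions: modest amounts at familiar hours.
normal = np.column_stack([rng.normal(50, 15, 980),    # amount
                          rng.normal(14, 3, 980)])    # hour of day
# A few anomalies: large amounts in the middle of the night.
fraud = np.column_stack([rng.normal(900, 100, 20),
                         rng.normal(3, 1, 20)])
X = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)   # -1 marks suspected outliers
print("flagged transactions:", int((flags == -1).sum()))
```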

Artificial Intelligence and the Future of Warfare

One sector that saw the huge disruptive potential of AI from an early stage is the military. The weaponization of AI will represent a paradigm shift in the way wars are fought, with profound consequences for international security and stability. Serious investment in autonomous weapon systems (AWS) began a few years ago; in July 2016 the Pentagon’s Defense Science Board published its first study on autonomy, but there is no consensus yet on how to regulate the development of these weapons.

The international community began debating the emerging technology of lethal autonomous weapons systems (LAWS) within the framework of the United Nations Convention on Certain Conventional Weapons (CCW) in 2014. Yet, so far, states have not agreed on how to proceed. Those calling for a ban on AWS fear that human beings will be removed from the loop, leaving decisions on the use of lethal force to machines, with ramifications we do not yet understand.

There are lessons here from non-military applications of AI. Consider the example of AlphaGo, the AI Go player created by Google’s DeepMind division, which in March 2016 beat the world’s second-best human player. Some of AlphaGo’s moves puzzled observers, because they did not fit usual human patterns of play. DeepMind CEO Demis Hassabis explained the difference as follows: “Unlike humans, the AlphaGo program aims to maximize the probability of winning rather than optimizing margins”. If this binary logic – in which the only thing that matters is winning while the margin of victory is irrelevant – were built into an autonomous weapons system, it would violate the principle of proportionality: the algorithm would see no difference between a victory that required killing one adversary and one that required killing 1,000.
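
A toy calculation, with invented numbers, makes the difference between the two objectives concrete: an agent scoring moves purely by win probability prefers a narrow but near-certain win, while a margin-maximizer accepts more risk for a larger victory.

```python
# Toy illustration of the objective described above; the moves and
# numbers are invented for the example, not taken from AlphaGo.
candidate_moves = {
    # move: (probability of winning, expected margin of victory)
    "safe":       (0.90, 1),    # near-certain win by a sliver
    "aggressive": (0.60, 40),   # risky, but wins big when it wins
}

# AlphaGo-style objective: maximize the probability of winning.
by_win_prob = max(candidate_moves, key=lambda m: candidate_moves[m][0])

# Alternative objective: maximize the expected margin of victory.
by_margin = max(candidate_moves,
                key=lambda m: candidate_moves[m][0] * candidate_moves[m][1])

print(by_win_prob)  # "safe": margin is irrelevant to this objective
print(by_margin)    # "aggressive": a margin-maximizer takes more risk
```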

Autonomous weapons systems will also have an impact on strategic stability. Since 1945, the global strategic balance has prioritized defensive systems – a priority that has been conducive to stability because it has deterred attacks. However, the strategy of choice for AWS will be based on swarming, in which an adversary’s defence system is overwhelmed with a concentrated barrage of coordinated simultaneous attacks. This risks upsetting the global equilibrium by neutralizing the defence systems on which it is founded, leading to a highly unstable international configuration that encourages escalation, arms races and the replacement of deterrence by pre-emption.

We may already have passed the tipping point for prohibiting the development of these weapons. An arms race in autonomous weapons systems is very likely in the near future. The international community should tackle this issue with the utmost urgency and seriousness because, once the first fully autonomous weapons are deployed, it will be too late to go back.

Conclusion

Both existing ASI systems and the plausibility of AGI demand mature consideration. Major firms such as Microsoft, Google, IBM, Facebook and Amazon have formed the Partnership on Artificial Intelligence to Benefit People and Society to address ethical issues and help the public better understand AI. AI will become ever more integrated into daily life as businesses employ it in applications that provide interactive digital interfaces and services, increase efficiencies and lower costs. Superintelligent systems remain, for now, only a theoretical threat, but artificial intelligence is here to stay, and it makes sense to see whether it can help us to create a better future. To ensure that AI stays within the boundaries we set for it, we must continue to grapple with building trust in systems that will transform our social, political and business environments, make decisions for us, and become an indispensable faculty for interpreting the world around us.

This article is adapted from part 3.2 of the Global Risks Report 2017 written by Nicholas Davis and Thomas Philbeck from the World Economic Forum.

