11 Apr 2018

Reforming the NHS with AI

Following on from its examination of the effects of technology in the public sector last year, the independent think tank Reform has released a report called Thinking on its own: AI in the NHS. It highlights the areas in which AI is currently deployed in the NHS, what its future uses could be, and which legal challenges need to be overcome to reach that destination.

By Tom Dent-Spargo


AI in the NHS Today

In maintaining one of its key features, being free at the point of use, the NHS faces difficult challenges: rising costs, reduced funding, and an ageing population. Technology has a major part to play in future-proofing the service, and the NHS has already recognised the role it can play, particularly how better use of data can deliver a higher quality of care.

With high-quality data, AI is one of the tools that can be used to help reform the service. Its applications range from improving outcomes to reducing costs: from decision-support tools that help with diagnosis to virtual assistants that schedule appointments more efficiently than humans, which is particularly significant where understaffing is an issue. So far, however, the take-up of AI in the NHS has been sparse at best. One reason is that it is currently incumbent on the private sector, rather than national bodies, to adopt such new technologies. A number of NHS trusts and organisations do engage with technology: the South Devon and Torbay Clinical Commissioning Group (CCG) uses the NHS Innovation Accelerator to provide a personalised self-care advice service, and Moorfields Eye Hospital works with Google's DeepMind. However, the lack of a national strategy means applications of AI in the NHS have been taken up piecemeal.

 

Future Potential of AI in the NHS

The NHS strategy, the Five Year Forward View, identifies three gaps that need to be narrowed in order to improve patients' experience of care, improve the health of the population, and reduce the cost per person. These gaps are: the health and wellbeing gap; the care and quality gap; and the efficiency and funding gap. AI can be utilised in all three areas.

Health and Wellbeing Gap

In this area, which focusses on prevention to improve life expectancy, AI could be used to predict the risk of illness in a population, allowing the NHS to deliver treatment more effectively and in a targeted manner. This could take the form of wearables that monitor health-related information, with AI analysing the data to keep the population better informed about their health. As well as promoting health in the community, AI can be used preventatively by helping clinicians identify individuals who are likely to develop certain complications. At the School of Computer Science and the Health Informatics Centre at the University of Manchester, health data has been used to cluster individuals into groups that share similar characteristics. Using these clusters, researchers are finding new patterns of comorbidity that would not otherwise have been spotted by any one clinician or health centre, meaning targeted preventative medicine can be administered swiftly. This raises legal issues around data and privacy, and even has insurance-related implications.
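
As a rough illustration of the clustering approach described above, the sketch below groups hypothetical patient records by similarity using k-means. The feature set and data are invented for illustration; the report does not describe the Manchester team's actual methods.

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
import numpy as np

# Hypothetical patient records: each row is one patient, each column a
# health indicator (age, BMI, systolic blood pressure, HbA1c).
patients = np.array([
    [67, 31.2, 152, 7.9],
    [34, 22.5, 118, 5.1],
    [71, 29.8, 160, 8.4],
    [29, 24.1, 121, 5.3],
    [58, 33.0, 149, 7.2],
    [45, 26.7, 130, 5.8],
])

# Standardise features so no single indicator dominates the distance metric.
features = StandardScaler().fit_transform(patients)

# Group patients into clusters of similar characteristics; the cluster
# profiles can then be inspected for recurring patterns of comorbidity.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(features)

for cluster_id in np.unique(labels):
    members = patients[labels == cluster_id]
    print(f"Cluster {cluster_id}: {len(members)} patients, "
          f"mean profile {members.mean(axis=0).round(1)}")
```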

 

Care and Quality Gap

AI can be employed to address the gap in delivering standardised, high-quality care. Its diagnostic capabilities can be used to detect variation in the quality of decision making and to provide more personalised care. AI's processing power also means that doctors can be kept abreast of the latest developments far more easily, because the technology has the capacity to analyse information from the millions of scientific articles published each year in medical journals.

AI can also deliver a swifter diagnosis. For example, it can be up to thirty times quicker at reading mammograms than humans, reducing the likelihood of misdiagnosis and of unnecessary procedures such as biopsies. Other systems are being developed to assist in surgery, while the NHS is already investing in the app Ieso, which can deliver online cognitive behavioural therapy and potentially halve treatment times.

 

Efficiency and Funding Gap

Automating tasks using AI can help narrow the efficiency and funding gap. With the Royal College of Nursing claiming that 17-19% of nursing time goes towards "non-essential" paperwork, virtual assistants could provide a solution. One example of an intelligent virtual assistant is Amelia, which can automatically compose letters, send patients reminders, book appointments, and carry out other similar tasks. It has been estimated that 750 additional routine operations a day could be performed in the NHS with better scheduling. This does, of course, raise employment law issues, as yet not fully explored, when robots join the workforce.

 

Data and Trust

There is still a lack of trust in AI among the UK's population over health matters, particularly when it comes to data and privacy issues. In a PwC survey, 47% of respondents said that they would use an intelligent virtual assistant via a smartphone, tablet, or computer, but these trust levels drop in more sensitive areas: only 37% would use AI to monitor a heart problem and just 3% would use it to monitor pregnancy.

Healthcare AI depends on access to datasets covering both individuals and the population at large, but with such sensitive personal data at stake in the NHS, there is a reluctance to share it. Often patients do not know what is happening with their data and do not like the idea of it being used beyond their direct care. Possible solutions include making the consent process as transparent as possible, using an opt-out model to give patients the choice, and engaging with them over trust issues. Data trusts could also be formed (as put forward by the report Growing the Artificial Intelligence Industry in the UK), which would allow data to be shared with industry players much more easily while navigating the tricky legal barriers.

Information governance frameworks identify how different types of patient data come with their own duties and responsibilities for those accessing and controlling them. For example, identifiable data (any data that can identify a specific individual) requires explicit consent from the patient. De-personalised data (data that has had any identifiers such as names or addresses stripped out and replaced by artificial identifiers) and anonymous data (which has no identifiers and cannot be used to identify an individual) do not need specific consent, as long as there is no chance of the data causing harm. However, there is still some way to go before healthcare data solutions fully satisfy patients.
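
To make the de-personalisation step concrete, here is a minimal sketch of stripping direct identifiers and replacing them with artificial ones. The record fields and the keyed-hash scheme are illustrative assumptions, not a method prescribed by the NHS or by the report.

```python
import hmac
import hashlib

# Secret key held by the data controller; without it the artificial
# identifiers cannot be linked back to real patients. (Illustrative only.)
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(record: dict) -> dict:
    """Strip direct identifiers and replace them with an artificial ID."""
    # Derive a stable artificial identifier from the NHS number using a
    # keyed hash, so the same patient always maps to the same pseudonym.
    artificial_id = hmac.new(
        SECRET_KEY, record["nhs_number"].encode(), hashlib.sha256
    ).hexdigest()[:16]

    # Keep only the clinical fields needed for analysis.
    return {
        "patient_id": artificial_id,
        "age_band": record["age_band"],
        "diagnosis_codes": record["diagnosis_codes"],
    }

record = {
    "nhs_number": "943 476 5919",   # direct identifier (removed)
    "name": "Jane Doe",             # direct identifier (removed)
    "address": "1 High Street",     # direct identifier (removed)
    "age_band": "60-69",
    "diagnosis_codes": ["E11", "I10"],
}
print(pseudonymise(record))
```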

 

Verification and Transparency

Healthcare is a high-risk area, where one mistake can cause huge damage, so great care must be taken when implementing AI to ensure it operates consistently. Transparency of AI's decision-making process will allow verification that it is functioning correctly, and it is vital that the designers of the AI can prove and validate its performance. Being technically sound is not enough, though. Given the highly pressurised environment of healthcare, AI systems must also be able to demonstrate how they will deal with unexpected hazards, which demands extensive stress-testing before deployment. Regulation needs to ensure that the risk is at an acceptable level without stifling innovation.
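
One plausible way to handle unexpected inputs (an illustration on our part, not a mechanism the report mandates) is for the system to abstain and refer a case to a clinician when its confidence is low. The sketch below demonstrates the idea on synthetic data; the model and the confidence threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple classifier on synthetic stand-in "clinical" data.
X, y = make_classification(n_samples=300, n_features=4, random_state=1)
model = LogisticRegression().fit(X, y)

def predict_with_abstention(model, x, threshold=0.8):
    """Return a prediction only when the model is confident; otherwise
    flag the case for human review rather than guessing."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    if proba.max() < threshold:
        return "refer to clinician"
    return int(proba.argmax())

# Stress test: feed perturbed inputs and count how many the system
# declines to classify instead of guessing.
rng = np.random.default_rng(0)
perturbed = X[:50] + rng.normal(scale=2.0, size=(50, 4))
decisions = [predict_with_abstention(model, x) for x in perturbed]
referred = sum(d == "refer to clinician" for d in decisions)
print(f"{referred}/50 perturbed cases referred for human review")
```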

In order to verify an AI's processes clearly, it is necessary to understand how it makes use of data. This becomes increasingly difficult with more complex systems, and many machine learning systems are inherently "black box": while it may be possible to predict the effectiveness of an AI, explaining how it arrived at a solution may not be. Access to the AI's code also presents an issue for its designers, as it is their intellectual property, and standard machine learning is not predisposed to causal reasoning, creating difficulties in truly understanding why an AI reached its final decision. The recommendation is to create a reliable framework for increasing the transparency of AI systems, providing some measure of reliable "AI explainability". This would require every organisation in the NHS that deploys an AI system to state clearly what its purpose is, what types of data are being used and in what way, and how anonymity will be protected.
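
Model-agnostic probes can offer a partial view into a black box even without access to its code. As one illustration (a widely used technique, not one the report specifically recommends), the sketch below uses permutation importance to reveal which inputs most influence a model's decisions; the data and model are synthetic stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical data: 5 features, binary outcome.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest acts as the "black box" whose reasoning is opaque.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature in turn and measure how much
# the model's accuracy degrades, revealing which inputs drive its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```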

 

Safety and Ethics

Safety and ethics are of central concern for the NHS when regulating AI. It has been shown that human biases can easily carry over to AI systems, since these systems are the product of subjective human labour. Often these biases emerge through inattention and carelessness, and such an approach could further entrench healthcare inequalities. Minimising bias at the outset is far easier than retroactively de-biasing data, so the certification process for regulating healthcare AI should give the regulatory agency access to data concerning pre-processing procedures and training.
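
As a simple illustration of the kind of pre-processing check a regulator might want to see (our illustration, not a procedure set out in the report), the sketch below audits training labels for imbalance across a protected attribute before any model is trained.

```python
import numpy as np

# Hypothetical training labels and a protected attribute (e.g. sex),
# audited for imbalance before a model is trained. Illustrative data only.
labels = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["F", "F", "F", "F", "F", "F",
                   "M", "M", "M", "M", "M", "M"])

# Compare the rate of positive labels per group; a large disparity in the
# training data is a warning sign that a model may inherit the bias.
for group in np.unique(groups):
    rate = labels[groups == group].mean()
    print(f"group {group}: positive-label rate {rate:.2f}")

# Demographic parity difference: the gap between group rates.
rates = [labels[groups == g].mean() for g in np.unique(groups)]
print(f"parity gap: {max(rates) - min(rates):.2f}")
```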

Finally, AI is viewed as a decision-support tool, augmenting a human doctor's capabilities rather than replacing them. This means that accountability and liability rest with the human doctor, for which frameworks are already in place. It is important to note, however, that human actions can be influenced by a machine's recommendation, even against the doctor's own judgment, which can cause serious problems if the recommendation turns out to be wrong. Setting out clearly the procedure for interacting with intelligent systems in healthcare is necessary to lower that risk.


 

About Reform

Reform is an independent, non-party think tank whose mission is to set out a better way to deliver public services and economic prosperity. It aims to produce research on the core issues of the economy, health, education, welfare, and criminal justice, and on the right balance between government and the individual. For the full report, visit www.reform.uk

 

