It started on 30 December 2019. Artificial intelligence systems identified the first clues of the coronavirus outbreak by scanning news images and social media posts from the market in Wuhan, China, where the outbreak is believed to have started.
It was a matter of days before the World Health Organization (WHO) released its risk assessment, and then it took another month for WHO to declare a global public health emergency for the novel coronavirus. The AI systems that broke the news included BlueDot, which monitors outbreaks of infectious diseases around the world.
The outbreak was also identified early by AI-based tools HealthMap at Boston Children’s Hospital and Metabiota in San Francisco.
Although AI has played a useful role in our response to the pandemic, the case can be overstated. At the same time that AI systems were flagging the pattern, it had also been picked up locally on social media with doctors and healthcare workers sharing their concerns and experiences. And, of course, the devastating impact of Covid-19 illustrates how ill-equipped governments and health authorities were to respond to it, especially outside Asia.
Jacob Turner, a lawyer and author of Robot Rules: Regulating Artificial Intelligence, is nevertheless optimistic about the positive contribution AI can make, and is making. “AI is playing multiple roles in respect of Covid-19: medically, from accelerating the process of vaccine discovery, to analysing CT scans of patients’ lungs,” he says. “AI systems are also now being put to work in modelling the spread of the virus, though this use case is a good example of the fact that many models are only as good as the data provided.”
There has certainly been a flood of new research interest in this field, including the launch of several competitions to harness AI in the fight against coronavirus.
These include Kaggle’s Covid-19 Open Research Dataset Challenge, which is supported by bodies including the National Institutes of Health and the White House and whose call to action to the world’s AI experts is to ‘develop text and data mining tools that can help the medical community develop answers to high priority scientific questions’.
The Decentralized Artificial Intelligence Alliance is putting together Covidathon, an AI hackathon to fight the pandemic coordinated by SingularityNET and Ocean Protocol; and MIT Solve – a marketplace for social impact innovation – has established the Global Health Security and Pandemics Challenge.
On 28 April, the newly formed C3.ai Digital Transformation Institute – a joint project between C3.ai, Microsoft and an array of top US universities – announced the first three recipients of grants as part of its inaugural programme: Using AI to Mitigate Covid-19 and Future Pandemics. The teams sharing $1m in grants are: developing a new model to predict the spread of Covid-19; building a system to track property evictions to inform US public policy on housing inequality; and developing computational techniques to interpret medical images to help with the surveillance, detection and triaging of Covid-19.
“These first three research projects represent the breadth of solutions for Covid-19 mitigation that artificial intelligence can bring to bear on fields as disparate as medicine, urban planning, and public policy,” said C3.ai’s chief executive Thomas Siebel.
Condoleezza Rice, former US Secretary of State, and Hoover Institution fellow and director designee, is an enthusiastic advocate of the institute’s potential. “We are collecting a massive amount of data about MERS, SARS, and now Covid-19,” she said. “We have a unique opportunity before us to apply the new sciences of AI and digital transformation to learn from these data how we can better manage these phenomena and avert the worst outcomes for humanity.”
Attitudes towards tech
There has been much reflection about how society may change as a result of Covid-19, including attitudes towards the use of technology.
Turner believes both the public and private sectors have been forced to become more reliant on it. “One of the barriers to AI adoption in some sectors is a natural desire for human decision-making,” he observes. “Although there will eventually be a reversion to face-to-face contact, this process will be slow. I expect in the meantime people will become increasingly comfortable with technology of all kinds (AI included) replacing or supplementing human efforts.”
A good example of this is chatbots, which use smart algorithms and natural language processing to disseminate information. One chatbot already on the market, Bold360ai, is able to interpret complex language for customers; because it holds textual conversations, it reportedly ‘remembers’ context across exchanges.
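The idea of a chatbot ‘remembering’ context can be sketched in a few lines. The example below is purely illustrative – it is not Bold360ai’s implementation, and the topics, replies and keyword matching are hypothetical stand-ins for real NLP intent detection:

```python
# Illustrative sketch: a chatbot that 'remembers' the topic of earlier turns
# so a vague follow-up ("tell me more") can still be answered in context.
# All names and answers here are hypothetical.

FAQ = {
    "symptoms": "Common Covid-19 symptoms include fever, cough and fatigue.",
    "masks": "Masks can help reduce transmission in enclosed spaces.",
}

class ContextChatbot:
    def __init__(self):
        self.last_topic = None  # context carried over from previous turns

    def reply(self, message):
        text = message.lower()
        # naive keyword matching stands in for real intent detection
        for topic, answer in FAQ.items():
            if topic in text:
                self.last_topic = topic
                return answer
        # a follow-up with no recognisable topic falls back on remembered context
        if self.last_topic:
            return "More on " + self.last_topic + ": " + FAQ[self.last_topic]
        return "Could you rephrase that?"

bot = ContextChatbot()
print(bot.reply("What are the symptoms?"))
print(bot.reply("Tell me more"))  # answered using the remembered topic
```

A production system would replace the keyword loop with a trained intent classifier, but the state carried in `last_topic` captures the essence of conversational ‘memory’.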
Last month, WHO teamed up with customer experience management platform Sprinklr to launch an AI-powered chatbot on Facebook Messenger to disseminate information about Covid-19 in four languages.
The initiative was part of the WHO Technology for COVID-19 Initiative, a pro-bono collaboration of technology companies brought together to fight the pandemic.
Professor Guang-Zhong Yang, founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University, says: “Robots can be really useful to help you manage this kind of situation, whether to minimize human-to-human contact or as a front-line tool you can use to help contain the outbreak.”
While the robots currently being used can only rely on technologies that are mature enough to be deployed, he argues that roboticists should work more closely with medical experts to develop new types of robots for fighting infectious diseases.
“What I fear is that, there is really no sustained or coherent effort in developing these types of robots,” he says. “We need an orchestrated effort in the medical robotics community, and also the research community at large, to really look at this more seriously.”
One diagnostics tool being piloted in hospitals by the Stanford Institute for Human-Centered Artificial Intelligence is using depth and thermal sensors to spot Covid-19 symptoms among the elderly.
The sensors are able to spot early Covid-19 symptoms, such as temperature variations or changes in people’s movements – for example, an alert may be triggered if someone remains seated for longer than usual. The goal is to deploy them in people’s homes. However, one challenge for the team is the need to protect privacy.
The institute’s co-director, renowned computer scientist Fei-Fei Li, told the Exponential View podcast that the team was exploring how to build privacy into the technology and rejected the notion there always needed to be a trade-off between technology and privacy in the application of AI.
“The human aspect: privacy, respect, dignity, should not be an afterthought,” she said. “From that point of view, I would not call it a trade-off. It is just part of the equation.”
Yang agrees. “Respecting privacy, and also being sensitive about individual and citizens’ rights, these are very, very important,” he says. “Because we must operate within this legal ethical boundary. We should not use technologies that will intrude in people’s lives.”
Turner argues that ethical dilemmas associated with the pandemic, such as how to balance the need to protect the elderly and the importance of limiting long-term damage to economies, have parallels with AI.
“We face similar ethical dilemmas whenever we delegate decisions to AI: how should AI take such decisions, and are there any decisions which AI should not take? For the last ten years many have ignored these problems as they apply to AI – the question of who a self-driving car should prioritise in the event of a crash is often asked but rarely answered. Pandemics force governments and regulators to engage with ethical issues, and my hope is that this level of engagement will assist in shaping AI policy and regulation in the future.”
A good example of this is the drive by several countries across the world to roll out contact tracing apps on smartphones.
‘It is highly unlikely... that those who designed what would become the smartphone back in the early 1990s could have anticipated it being considered the “go-to” solution for resolving the challenges the current pandemic presents,’ wrote Reema Patel, head of public engagement at UK AI and technology think tank the Ada Lovelace Institute, in a recent blog.
She was warning against technology ‘solutionism’ – an over-reliance on the ability of technology to solve complex problems that ‘works well for the purveyors of smartphones and digital contact tracing apps... [but] works less well for those looking for multi-faceted interventions to resolve complex problems.’
However, the scramble to find a way out of the crisis – and the critical need to maintain public trust in these solutions for them to work – is forcing governments, regulators and privacy experts to engage with radical solutions.
In April, the Ada Lovelace Institute published a rapid evidence review into the implications of contact tracing apps that drew on the thinking of an array of legal academics. Exit Through the App Store? notes this is the first pandemic of the algorithmic age, and asks ‘whether, and how, the UK Government should use technology to transition from the Covid-19 global public health crisis’.
It calls for ‘the introduction of primary legislation to regulate data processing and to impose strict purpose, access and time limitations on its use, which would also address concerns about other data-driven measures such as symptom tracking’.
“The Government is right to explore non-clinical measures in its response to the COVID-19 crisis,” said Carly Kind, director of the Ada Lovelace Institute.
“But it must take action to ensure technological applications, such as the proposed NHS rollout of digital contact tracing, do not become counterproductive because of a failure to take account of both the barriers to deployment and the full impact on people and society.”
EU White Paper on AI
Turner welcomes the publication in February of the EU’s White Paper on AI regulation, which seeks to strike a balance between the need to innovate and the protection of rights such as privacy, as a move in the right direction.
“Regulators are increasingly working with the private and public sector to understand how AI is being used and what problems could arise, but this work needs to go faster,” he says. “Although it is sometimes thought that regulation stifles innovation, in fact if regulation is done well then it can provide a stable framework for technology development, because companies will be able to operate with greater certainty. Regulation can also increase public trust, which in turn leads to a greater uptake of technology.”
He highlights WHO as being a trusted supranational body that is well placed to lead this debate, and foresees greater international collaboration.
Back to the critical role of people and 30 December 2019.
Wuhan Central Hospital doctor Li Wenliang had warned his former classmates about the virus in a social media group. Within a matter of hours, he was summoned by the local authorities to answer questions.
Dr Li died on 7 February after contracting the virus. Days before, he told The New York Times that it would have been better if officials had disclosed information about the epidemic earlier. “There should be more openness and transparency,” he said.
Many a media headline has likened the situation we face to a war, and perhaps there is some merit to the description.
Endorsing the C3.ai Digital Transformation Institute project, French statesman Jacques Attali said: “We are at war and we must win it! Using all means.”
He added that the project “will organise global scientific collaboration for accelerating the social impact of AI, and help to win this war, using new weapons, for the best of mankind.”
Historically, much robotics and technology research and development has been driven by the military. It would be a welcome step forward for humankind if Covid-19 led to similar advances in the fields of medicine.
Sadly, the moment is likely to pass, with old priorities springing back more quickly than the economies.