21 Nov 2018

Ethics, technology and the future of humanity

The widely renowned Australian moral philosopher Professor Peter Singer is at the forefront of thinking on the social impact and ethical implications of new technologies. In June 2018, Professor Singer gave a public lecture on ethics and technology at the World Intellectual Property Organization (WIPO).

Professor Singer notes that a technological future of artificial intelligence and superintelligent machines much smarter than humans raises many questions that require careful reflection. This is a summary of his lecture, provided by Catherine Jewell, Communications Division, WIPO.

Ethics defined

When we reflect on the judgments we make, we should be able to agree on some basic principles of ethics, even if we disagree on particular applications of those principles in different circumstances. For example, from an ethical viewpoint, we ought to be able to accept that the interests of all people are equal. My interests don't count for more than those of others elsewhere, provided similar interests are at stake. If we assume a given disease causes similar suffering in humans everywhere, then I think we can agree that we should give equal weight to each patient suffering from it, irrespective of other differences.

That idea is reflected in the Universal Declaration of Human Rights and other international covenants. Ethics is not a matter of taste; its basic principles are self-evident truths, akin to the reasoning of mathematics or logic. Ethics is therefore a matter on which there are objectively right and wrong answers. But, of course, within that idea of equal consideration of interests, there is room for different ethical views about what we ought to do and how we are to live. There are two fundamental philosophical approaches to this.

One view says that the right thing to do, given that everyone's interests have equal weight, is to try to maximise the interests of everybody: to promote well-being and reduce suffering. This is the utilitarian view associated with the 18th and 19th century English philosophers Jeremy Bentham and John Stuart Mill, and it is still held by various contemporary philosophers, including me. The other view, associated with the 18th century German philosopher Immanuel Kant, is the idea that certain things are inviolable; they are contrary to human dignity and must never be done.

The utilitarian view doesn't mean that human dignity and human rights are unimportant. Such rights matter because they lay the foundation for a society that promotes the well-being of everyone. But it doesn't follow that one may never act against a particular right. Take the scenario of a runaway train heading toward a tunnel where it will kill five workers. If you divert the train, it will kill only one. As a utilitarian, I think one should be prepared to sacrifice one life to save five.

Nor should we assume that evolution is guided by some kind of providence to reach the best ethical outcomes. We could imagine better outcomes: more intelligent, altruistic and compassionate humans, for example. Perhaps that is what we need to do to protect the future of humanity.


Artificial intelligence and the future of humanity

The development of artificial intelligence (AI) is another important area requiring careful reflection. Increasingly, AI is being used to do work that humans can already do. In manufacturing, for example, robots are taking on the repetitive tasks formerly undertaken by workers on the factory floor. We can anticipate that AI will take on such tasks in many other areas. That means we have to think about how to develop a society where there is much less need for human work, but which captures productivity gains and transfers them to people – perhaps through some kind of universal basic income scheme – in a way that still meets their need for a sense of purpose. This will be a very difficult challenge.

Some commentators believe the development of superintelligent machines that are significantly smarter than humans is imminent. What will that mean for the future of humanity? Will superintelligent machines decide they are better off without us? That alarming prospect would be a tragedy of unimaginable proportions: the end of billions of years of evolution on this planet and the loss of the potential of all future generations of humans. Should we, then, focus on reducing, as far as possible, the risk of human extinction? Or would these superintelligent machines themselves – if they were conscious beings – have intrinsic value, equivalent to, or even superior to, our own? Most people will reject that suggestion, but perhaps we have an inherent bias in favour of our own species. We certainly need to think more about this prospect.

There are many questions facing us as we march toward this new technological future. And there are many uncertainties. My hope is that we will use technology to bring about a better life for all in a more egalitarian way that helps those who are worst off. That is where we can do the greatest amount of good.

Excerpts from a lecture by Professor Peter Singer.