The IEEE has now released an updated version, which is also open to comments from the public. Here is a look at some of the updates to issues already covered last year, as well as brand new content for the second version of the document.
New legislation like the European General Data Protection Regulation (GDPR) is due to take effect this year. The threat of substantial fines will push organisations to meet at least the minimum levels of compliance, but the regulation may also help more forward-thinking organisations to shift their data strategies (usually in marketing, product, and sales) towards harnessing intentions volunteered by customers, instead of merely tracking customers’ actions from a position of invisibility. Owners of digital personas often find it difficult to understand what rights they are entitled to; specific, targeted legislation like the GDPR will help them gain transparency over their data.
Furthermore, an individual’s values, or their access to personal data, may sometimes contradict national, regional, or even local legislation. The right to privacy is regarded as a fundamental human right, but it is not globally upheld and is often culturally contested. Understanding specific regional jurisdictions over data, such as Europe’s GDPR, is essential for developing A/IS that require any personal information. AI is potentially useful for assisting governmental authorities in law enforcement and intelligence collection, but it must be used in a balanced way, with transparency for users. Placing personal information front and centre of regulation in this area will be key to this balancing act, unlocking the potential of A/IS while respecting an individual’s personal data.
Autonomous weapons systems (AWS) present a global challenge currently being taken very seriously by the UN’s Convention on Conventional Weapons (CCW). The convention met at the end of 2017 to discuss how to proceed with the regulation and control of AWS. Further development of the technology risks geopolitical instability and direct arms races between nations and groups, threatening international security. Critics of the CCW, such as the vocal Campaign to Stop Killer Robots, claim that it has been too slow to act and that a fully global arms race is now underway.
Removing human oversight from battlespaces could lead to both intentional and inadvertent abuses of human rights. The true nature of AWS – that their stated use as violent tools makes them inherently unethical – should be acknowledged at every stage of their development, and an element of human control is recommended to keep harm to humans to a minimum and to ensure meaningful human control over direct attacks.
Introducing internationally agreed standards of practice is a key objective for the CCW, one which would establish accountability for the design and use of AWS.
Economic and Humanitarian Issues
An issue familiar to anyone involved in industries that employ, or may soon employ, any form of A/IS is the threat of automation and its implications for jobs. Whether it is the fear of losing a job entirely, having to retrain, or some as-yet-unknown effect of Industry 4.0, fundamental shifts in working practices are on the horizon. The report notes that the pace of technological change is outstripping current methods of training and retraining the workforce. Investment in the labour market will be needed to ensure that, while some jobs are displaced, the A/IS revolution can also transform the workplace, creating new jobs and skills that never existed before. Individuals must take responsibility for training and retraining their skills, while larger organisations need to provide the investment platforms that serve as foundations for them.
The legal status of AI is an issue which was not considered in the first version of Ethically Aligned Design. Based partly on the feedback that document received, it has now been placed front and centre of the Law section of the new document. Ultimately, the report does not recommend that AI be recognised as holding any legal status of personhood, but it acknowledges the importance of the issue, demonstrating how far – and how quickly – the development of A/IS has come, with legal and ethical questions increasingly relevant.
The conversation arises partly because A/IS are able to display increasingly human qualities. One prominent recent example is Sophia, the first robot to be granted citizenship, in Saudi Arabia. These advances raise the issue of legal status. At present, robots cannot be fully treated as people, but is legally recognising them as property sufficient? Given their novelty, perhaps an entirely new category is required. Whatever the answer, it has to be appropriate for legal frameworks on a global scale, given the various legal issues that the employment of these technologies raises.
One of those issues is legal accountability for harm caused by A/IS. It is inevitable that A/IS will sometimes cause humans harm during their operations, and as humans are increasingly removed from the decision-making processes involved, the question becomes how to apportion blame and how to hold the appropriate entity accountable. The recommendation is that A/IS industry experts work with legal experts to identify those laws and regulations that may be problematic, or simply not work at all, when the decision-maker involved in an incident is a machine rather than a human. If A/IS continue to be legally recognised as mere property, this area will become extremely murky, so the need for swift resolution is clear.
The psychological impact of AI is another developing area, as the importance of emotions to intelligence becomes clearer: they are not a barrier to intelligence, but rather a key component of it. Artefacts that amplify or dampen those emotions will have an impact on the effectiveness of AI. Furthermore, working out what emotional effects A/IS have on humans – and are likely to have in the future – is key to developing the policy and regulatory frameworks that will allow them to have the most positive impact on humanity.
Further issues emerge as relationships with affective systems become closer and more intimate, since it is difficult to ascertain whether moral and ethical boundaries have been crossed in these situations. Designers and makers of such systems must be aware that our relationships with them can affect human-to-human relationships, for example by introducing an emotion such as jealousy. Designers are also encouraged to ensure that affective systems are not built in ways that contribute to human isolation from others, or that emotionally or sexually manipulate human users – unless those users are completely aware of it and clearly opt in.
The tapestry of emotive issues surrounding the development of A/IS leads to the question of how to legislate and regulate them effectively. A prime example of an affective system that demands lengthy legal debate is a sex robot, whether designed for private use or for the sex industry. The issue is whether such systems should be banned outright or strictly regulated. The report favours the latter option, with a further recommendation that the industry as a whole be tailored to local laws and policies.
A/IS policy should centre around the goal of the promotion of safety, human rights, privacy, IP rights, and cybersecurity, as well as the public understanding of the impact of A/IS on society as a whole. The report advocates that five principles should guide any A/IS policies:
- Support, promote, and enable internationally recognised legal norms
- Develop workforce expertise in A/IS technology
- Include ethics as a core competency in research and development leadership
- Regulate A/IS to ensure public safety and responsibility
- Educate the public on societal impacts of A/IS
Using these five principles, it should be the objective of governments around the world to provide effective regulation of A/IS that focusses on public safety and responsibility, while also fostering a robust AI industry. To produce consistent legislation that ensures transparency and accountability, policymakers need to communicate with a range of expert stakeholders from industry and academia, to properly consider the questions raised by the development and deployment of A/IS.
Ethical training has been cited as integral to the successful development of A/IS. However, ethics courses are often not compulsory in degree programmes in this field. Beyond teaching ethics at degree level, there is also a need to present the makers of autonomous systems with ethical vocabulary, making it easier for them to accurately convey ethical processes in the design of their systems. Being able to demonstrate the decision process of the design of systems with particular regard to ethics ensures that transparency and accountability are also upheld.
Beyond introducing ethical discourse into the design of intelligent systems, there is also a need to examine the impact of introducing these systems into the workplace. A/IS has the potential to shift the dynamics and power relationships between workers and employers, and mitigating these changes requires ethical guidance. A concept that has begun to gain traction is Responsible Research and Innovation (RRI), already adopted by agencies such as the EPSRC, which includes RRI principles in its mission statement. The report recommends applying RRI, which is grounded in classical ethics theory, from the first stage of designing A/IS all the way through to its deployment in the world.
Diversity of ethics is also of great significance, to ensure that no single set of cultural and societal norms is favoured over others. The report recommends drawing on different sources of ethics – for example, religious and philosophical traditions such as Buddhism, Shinto, and Ubuntu – so that A/IS are globally diverse and globally ethical.
The rise in VR and AR has already begun to challenge notions of reality, blurring the line between the digital and the physical realms. This looks set to continue into the next generation as the technology moves beyond headsets to more sophisticated and subtle sensory enhancements. They will also be used in more aspects of daily life, altering perception of reality in a visible manner, as sensory tasks are increasingly delegated to software – representing an unprecedented level of trust in systems and data. The IEEE has been working to find methodologies for providing this future with an ethical skeleton, focussed on the rights and safety of the individual, particularly with reference to maintaining control of their multi-faceted identity.
Before mixed realities can truly pervade society, both legal and ethical frameworks need to be in place, particularly for control over data. VR and AR have the potential for ubiquitous and continuous recording: they are highly mobile and transmit information about people for substantial periods of time. The use of such personal information means the concept of privacy needs to be rethought in both public and private spaces, with the knowledge that these technologies constantly log data that could be vulnerable to attack or requested by law enforcement agencies. Maintaining control over personal rights to privacy and data is essential for the successful integration of mixed reality technologies, and new laws will need to be drawn up to specify data ownership in these cases.
The stated goal of implementing A/IS in the world is to further the cause of humanity, and to do so ethically, in a way that protects the rights of people both as individuals and as a species. But what metrics can be used to judge the success of ethically designed AI, or to demonstrate the well-being of humanity? There is a need for concise and clear indicators to measure such advancements but, as yet, there is no unified understanding of what well-being indicators are or which ones are available.
Some common metrics of success include profit, GDP, and economic growth. However, these cover only part of the spectrum of well-being for individuals and for society. There are also psychological, social, and environmental factors which, if ignored by technologists developing A/IS, will throw up future problems for humanity.
Version 2 of Ethically Aligned Design is open to feedback and comments from the public. The submission deadline is 12 March 2018 at 5PM (EST). For more details on how to submit and for the full report, visit the IEEE website, https://ethicsinaction.ieee.org/