Examining a range of topics and offering recommendations, the report analyses its subjects under four high-level principles:
- Human Benefit, to ensure that AI/AS are designed and operated in a way that respects human rights and freedoms, and that they are verifiably safe throughout their operational lifetimes.
- Responsibility, to ensure that AI/AS are accountable: issues of culpability should be clarified for legislatures and courts, systems of registration set up, and manufacturers required to provide programmatic-level accountability.
- Transparency, where it is always possible to discover how and why an AI/AS made a decision or acted in a particular manner.
- Education and awareness of ethics and security, for both users and other stakeholders such as law enforcement, to minimise the risks of AI/AS being misused.
Recommendations
The report recommends prioritising the values that reflect those shared by the larger stakeholder groups. The Common Good Principle could even be used as a guideline to resolve differences in the priority ordering of different stakeholders. Having a rationale for the priority ranking is also important: it obliges designers to reflect on the cultural values at the time of creation, and gives third parties a point of reference.
Human biases, whether intentional or accidental, can be embedded into AI/AS, and this can be harmful. How a system senses the world, and how it then processes that sensory information, are the main points of entry for such biases, so the programming of information has to be held under intense scrutiny to ensure that no group of vulnerable people can be targeted. The report recommends including the intended stakeholders throughout the engineering process to highlight any potential conflicts or points of bias, and a simple form of such scrutiny is sketched below.
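To make that scrutiny concrete, here is a minimal sketch (not from the report) of one common bias check: comparing a model’s favourable-outcome rates across demographic groups. The data, group labels, and the “four-fifths” threshold are illustrative assumptions.

```python
# Hypothetical sketch of a disparate-impact check: compare the rate
# of favourable outcomes a model produces for each demographic group.

from collections import defaultdict

def positive_rates(predictions, groups):
    """Return the favourable-outcome rate per group, given aligned
    0/1 model outputs and group labels."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative outputs for a hypothetical loan-approval system.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rates(preds, groups)
print(rates)  # {'A': 0.8, 'B': 0.4}

# A common (and contested) rule of thumb, the "four-fifths rule":
# flag for review if one group's rate is below 80% of another's.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential disparate impact - review the pipeline.")
```

A check like this only surfaces one kind of bias in one place; the report’s point is that sensing and processing stages both need this sort of systematic examination.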
Education
Of particular concern is the fact that ethics is not part of degree programmes relating to the design and construction of intelligent systems, meaning that the people involved in creating AI/AS are unable to properly translate ethical concerns into mathematical programming. The report recommends a cross-disciplinary approach that integrates ethics into this area, bringing together the humanities, social sciences, science, and engineering.
The report highlights measures such as instituting Chief Values Officers within companies to implement ethical design, and updating workplace Codes of Conduct to include AI/AS in their scope. Other methodological concerns centre on the lack of documentation that hinders ethical design: software engineers should be required to document all of their systems properly, both to provide transparency and accountability and to demonstrate the ethical values that have been embedded, which would complement the establishment of independent third-party review structures.
Autonomous Weapons Systems (AWS)
The defining facet of AWS is that they are intended to cause harm, which raises ethical considerations beyond those of traditional weapons or of autonomous systems not designed to harm. The report highlights the lack of standards for guaranteeing that AWS comply with relevant legal or ethical requirements, and recommends efforts to standardise comprehensive verification and validation protocols for AWS, with the discussions involving stakeholders such as humanitarian organisations.
Economics/Humanitarian Issues
Media representation of AI and robots in relation to jobs often confuses the public, trading on a binary utopia/dystopia image that is oversimplified and misleading. Furthermore, the complexities of how automation will affect employment are neglected, with analysis focussing on just the number of jobs lost or created, as opposed to examining how the structures surrounding employment will change and adapt to automation.
If an independent clearinghouse can properly disseminate objective statistics on automation, informing the general public and other stakeholders and focussing the analysis of automation on the more complex issue of employment structures, the risks of robots “taking over” can be minimised, both in reality and in public perception.
Worldwide organisations, such as the United Nations, can work with stakeholders from governments, NGO communities, and the corporate sector to ensure effective education, global standardisation, and the promotion of knowledge-sharing.
Law and Transparency
Providing clear and accountable legislation is vital to the continued development of AI/AS. The report recommends increasing the visibility of the AI/AS design process, making these systems accountable and verifiable and thereby aiding regulation.
Governmental decision-making is increasingly being automated, so it is essential to guarantee legal commitments to transparency, participation, and accuracy when AI/AS make important decisions about individuals. To that end, the report recommends that governments should not employ AI/AS that cannot provide an account of the law and facts essential to their decisions or risk scores, and that AI systems should be designed with transparency and accountability as primary objectives, built in from the bottom up, allowing better inspection and verification by the systems’ overseers as well as by interested stakeholders.
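As one illustration of what “an account of the law and facts essential to decisions” could look like in software, here is a minimal sketch, not taken from the report, in which an automated decision emits a structured, inspectable record of its inputs and the rules applied. All names, fields, and thresholds are hypothetical.

```python
# Hypothetical sketch: an automated decision emits a structured
# record of the inputs (facts) and rules (law/policy) behind it,
# so overseers and affected individuals can inspect the outcome.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str       # who the decision is about
    outcome: str          # what was decided
    facts: dict           # inputs the decision relied on
    rules_applied: list   # policy/legal rules invoked
    model_version: str    # which system produced the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def decide_benefit_eligibility(subject_id: str, income: float,
                               dependants: int) -> DecisionRecord:
    """Toy eligibility rule with a built-in audit trail."""
    eligible = income < 20_000 or dependants >= 3  # hypothetical thresholds
    return DecisionRecord(
        subject_id=subject_id,
        outcome="eligible" if eligible else "ineligible",
        facts={"income": income, "dependants": dependants},
        rules_applied=["income < 20000", "dependants >= 3"],
        model_version="toy-rules-1.0",
    )

record = decide_benefit_eligibility("applicant-42", income=25_000, dependants=1)
print(json.dumps(asdict(record), indent=2))  # inspectable, appealable record
```

Stored alongside each outcome, records like this give overseers and affected individuals something concrete to inspect, verify, or appeal.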
AI industry legal counsel should work closely with legal experts to identify laws that are less likely to work when humans are not the decision-makers. To ensure the safe use, and avoid the misuse, of AI/AS, measures such as a payment system for AI liability, similar to a workers’ compensation scheme, could be implemented; this would protect against the high costs of lawsuits, and against victims being required to meet an unreasonably high standard of proof when seeking damages.
Companies that use and manufacture AI/AS should also be required to have written policies stating the exact use of their systems, who is qualified to use them, what training is required, and what operators should expect from the AI itself. This will provide a clear picture of the AI’s capabilities, as well as protecting the company in litigation.
Liability and intellectual property also need to be clarified. The report suggests that liability should not automatically be assigned to the person who switched on the AI, but should instead look to the person who manages or oversees it. Intellectual property statutes should be reviewed to clarify whether amendments are required for the protection of works created with the use of AI. The basic rule should be that when an AI product relies on human interaction to create new content or inventions, the human user is the author or inventor and receives the same intellectual property protection as if he or she had created the content or inventions without any help from AI.
The document is currently at version 1, and the IEEE is inviting comments from the public. The submission deadline is 6th March 2017; for more information on how to submit, as well as for the full report, visit the IEEE’s website.