By Labhaoise Ní Fhaoláin, Consultant, Head of Knowledge and Business Development, DAC Beachcroft Dublin
The road to personhood?
To make the chatbot experience comfortable and reassuring, AI is being deployed to make the bots “more human.” This is done by making responses appear to be typed at human speed, and even by throwing in the occasional typo correction. Bots are also made slightly less efficient in reaching a conclusion but more attentive to social norms, whether initiating a conversation (“Hi, my name is..., how can I help?”) or offering reassurance (“let me check that for you”).
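The “typed at human speed” effect described above can be sketched in a few lines. This is a minimal, illustrative example only - the function names and the assumed typing rate of eight characters per second are hypothetical, not drawn from any particular chatbot product:

```python
import random
import time

def humanlike_delay(text: str, chars_per_second: float = 8.0) -> float:
    """Return a plausible 'typing' delay in seconds for a reply.

    The +/-20% jitter keeps the pacing from feeling mechanical;
    the 8 chars/second rate is an illustrative assumption.
    """
    base = len(text) / chars_per_second
    return base * random.uniform(0.8, 1.2)

def send_reply(text: str, max_wait: float = 3.0) -> str:
    """Pause as if a human were typing the reply, then return it.

    The wait is capped so long replies do not stall the conversation.
    """
    time.sleep(min(humanlike_delay(text), max_wait))
    return text
```

In a real deployment this pacing would typically be handled by the chat front end, often alongside a “typing...” indicator shown to the user while the delay runs.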
These “improvements” in social interaction also raise the Turing Test bar - making it harder to distinguish whether we are interacting with a bot or a human. Could the laws on agency be tweaked and applied as if the chatbot were a legal agent in the traditional sense? Some thought has been given to conferring personhood on AI systems (similar to the legal fiction of the status of a corporate body), under which the AI system could own assets and be sued in court.
In a resolution passed on 16 February 2017, the European Parliament recommended to the Commission that a “specific legal status” be given to robots - a form of personhood akin to corporate personality, albeit one backed by a human and requiring, for instance, insurance to cover liability. In an open letter, industry figures and experts argued that the natural person model should not apply because, among other issues, it would confer human rights on a robot, in contravention of the European Convention on Human Rights. Ultimately, the European Commission did not take up the Parliament’s recommendation.
Law-makers are experts in finding ways to apply existing law to new problems. We can look back to the 6th century, when the Irish Brehon Law principle of “to every cow its calf” was pivoted to protect copyright: the High King of Ireland, in passing judgment, declared “to every book its copy.” This approach has endured through the ages and, given the importance of consumer protection in the face of innovations in AI, whether existing law can be pivoted to protect consumers should continue to be explored. For example, can the law of agency be applied to chatbots?
Agency means different things to different people
Philosophers define agency as the capacity of an actor, or person with free will, to act in a given environment. In the social sciences, it is the capacity of individuals to act independently and to make their own free choices. From a psychology perspective, agency entails intentionality and implies the ability to perceive and to change the agent’s environment. There is a danger that a perceived lack of intent on the part of chatbots leaves them to be considered merely as mechanical devices.
From a legal perspective, the agency relationship is a fiduciary one. Subject to certain exceptions, it is possible to engage another person to enter into a contract on your behalf - that person is your agent. The basic tenets of an agency relationship are capacity to act and consent. The capacity to act requires the agent to be able to carry out the necessary tasks in order to bind the principal. The agent can bind a principal even if the agent could not bind him or herself. For example, a minor can act as an agent and bind the principal in contract even if that minor could not enter into the contract him/herself for want of legal capacity. An agent may act gratuitously and need not benefit from being an agent.
An agency relationship may therefore be distinguished from other forms of contract in which consideration is a key element. The doctrine of the undisclosed principal is long established, and the identity of the principal is a “material ingredient” in agency. The disclosure and identity of the agent may also be relevant.
Applying legal agency to chatbots
Whether the law of agency could apply to chatbots is relevant only to the external aspect of agency (the interaction between the agent and third parties), rather than the internal aspect (the agent’s interaction with the principal). There are parallels between a minor acting as an agent and a chatbot - neither has the capacity to contract for him/her/itself, but each can contract on behalf of a principal. No consideration is required for a relationship of agency to exist, so there is no need for the chatbot to benefit in any way.
It has been held that the identity of the principal may be a “material ingredient” to an agreement in some cases. The nature of the agent (whether human or not) may also be relevant. The question of misrepresentation may arise, and whether, and to what extent, the third party would have altered his/her course of action had they known that they were interacting with a chatbot may be relevant. Under California law, from 1 July 2019, chatbots used to influence a commercial transaction or a vote in an election must disclose that they are not human.
Given the speed at which technology is developing, there is a great need for lawyers, scientists, engineers and ethicists to work together to ensure that our technology laws are fit for purpose - not only for technology which is widely available, but also for technology which is still in its infancy.