13 Feb 2019

Crossing a robotic border

The European Union (EU) is set to begin a pilot programme to protect national borders using artificial intelligence (AI). Dubbed iBorderCtrl and branded as an AI-powered lie detector, it will require passengers to upload pictures of their passport, visa, and proof of funds before answering a series of questions via webcam.

The AI will read the person’s gestures to judge whether they are acting suspiciously or whether they satisfy the EU criteria for safe travel. The pilot will run for six months in Hungary, Latvia, and Greece, made possible through an EU contribution of approximately €4.5 million.

John Michael, Assistant Professor in the Department of Philosophy at the University of Warwick, highlights the logistics of the new pilot programme and some of the issues it poses.

The researchers seem to have reflected on the ethical and legal challenges, and to have given thought to how the risks can be mitigated. Most importantly, they are complying with the rigorous ethical standards that are expected within research. This means, for example, that nobody is forced or pressured into participating in this pilot project, and that a decision not to participate won’t negatively affect anyone’s chances of entering a country. Likewise, anybody who agrees to take part can still change their mind and withdraw, again without affecting their chances of entry. Of course, we should be aware of the risk that once this technology is developed, it could be implemented in the future without these researchers having control over how it is used. It would be helpful to think of ways of ensuring that the governments or other groups who control this technology abide by the same ethical standards as these researchers.

The research team is also very careful to emphasise that the assessments of people’s character or intentions arrived at with the help of this technology are uncertain. In other words, the technology picks up on cues that are suggestive of deceptive intentions or of trying to hide something. People may be trying to hide something for lots of reasons that have nothing to do with their gaining access to a country (for example, they may be trying to hide the fact that they are travelling to country X partly to meet secretly with a boyfriend or girlfriend who is travelling to X on business). If someone is flagged by the system as suspicious, it only means that he or she will undergo a bit more scrutiny than someone who is not flagged. This may save time both for people working at the border and for travellers.
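To make this point concrete, here is a minimal Python sketch of how an uncertain score might be used only to route travellers to extra human screening, never to decide entry. Everything here is invented for illustration: the function name, the score scale, and the threshold do not reflect iBorderCtrl’s actual implementation.

# Hypothetical sketch: an uncertain deception score triggers extra
# scrutiny only; it never decides entry on its own. All names and
# thresholds below are invented for illustration.

def route_traveller(deception_score, threshold=0.7):
    """Map an uncertain model score in [0, 1] to a screening decision.

    A score above the threshold means 'suggestive cues were detected',
    not 'this person is lying': the only consequence is more scrutiny.
    """
    if deception_score >= threshold:
        return "secondary screening by a human border officer"
    return "standard processing"

print(route_traveller(0.85))  # -> secondary screening by a human border officer
print(route_traveller(0.30))  # -> standard processing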

What kinds of cues is the system going to pick up on? Well, to some extent we don’t really know in advance, because it is going to learn bottom-up which features of people’s behaviour tend to be most reliably associated with deceptive intentions. The starting point will be to look for indicators that people are departing from behavioural patterns that tend to arise spontaneously in interactions but may not do so if you are deliberately trying to make a certain impression. For example, there are patterns of eye contact (how frequently you make eye contact in an interaction and for how long each time) which, if disrupted, would suggest that you are deliberately attending to them (for example, trying to avoid eye contact, or trying to avoid giving the impression that you are avoiding eye contact). If you start trying to craft these features of your behaviour consciously, you may wind up appearing a bit unnatural, which could be flagged as suspicious.
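As a purely schematic sketch of this bottom-up idea, the following Python snippet trains an off-the-shelf classifier (scikit-learn’s logistic regression) on two invented eye-contact features and lets it learn which patterns resemble the flagged examples. The features, data, and labels are all hypothetical and are not drawn from the actual system.

# Hypothetical illustration of 'bottom-up' learning: a generic classifier
# is given simple interaction features (here, eye-contact rate and average
# eye-contact duration) and learns which ones correlate with the flagged
# examples in its training data. All numbers and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [eye_contacts_per_minute, mean_eye_contact_seconds]
X = np.array([
    [12.0, 1.8],   # relaxed, spontaneous interaction
    [11.0, 2.1],
    [3.0, 0.4],    # conspicuously avoiding eye contact
    [25.0, 4.0],   # conspicuously forcing eye contact
])
y = np.array([0, 0, 1, 1])  # 0 = unremarkable, 1 = flagged

model = LogisticRegression().fit(X, y)

# Probability that a new traveller's pattern resembles the flagged ones.
print(model.predict_proba([[4.0, 0.5]])[0, 1])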

Similarly, we tend to naturally imitate each other when interacting: if I scratch my nose while you and I are talking, you are more likely to scratch your nose than if I hadn’t just done so. If people either stop doing things like this or start doing them more frequently than normal, this could be flagged as suspicious.
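Again purely as illustration, a deviation from a typical mimicry rate in either direction could be flagged with something as simple as a z-score. The baseline figures and cutoff in this Python sketch are invented, not measured.

# Hypothetical sketch: deviation from a typical mimicry rate (how often a
# person mirrors the interviewer's gestures) is flagged via a z-score.
BASELINE_MEAN = 0.35   # assumed typical fraction of gestures mirrored
BASELINE_STD = 0.10    # assumed spread across comfortable interactions

def mimicry_is_atypical(observed_rate, cutoff=2.0):
    """Flag mimicry that is unusually rare OR unusually frequent."""
    z = (observed_rate - BASELINE_MEAN) / BASELINE_STD
    return abs(z) > cutoff

print(mimicry_is_atypical(0.05))  # True: almost no mirroring
print(mimicry_is_atypical(0.70))  # True: far more mirroring than usual
print(mimicry_is_atypical(0.40))  # False: within the normal range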

In addition, we know some things quite concretely about the facial muscles involved in smiling, and can differentiate between fake and real smiles (different muscles are involved), so a system like this may well pick up on this as well.
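The distinction rests on the fact that genuine (“Duchenne”) smiles engage the muscle ring around the eyes as well as the lip corners, whereas posed smiles often move the mouth alone. A toy Python sketch, assuming some upstream facial-analysis pipeline supplies muscle (“action unit”) intensities, might look like this; the names and threshold are invented for illustration.

# Hypothetical sketch of the fake-vs-real smile distinction. Genuine
# ('Duchenne') smiles activate the muscle around the eyes (action unit 6)
# together with the lip corners (action unit 12); posed smiles often show
# AU12 without AU6. Intensities are assumed to come from some upstream
# facial-analysis pipeline; the threshold is invented.

def smile_looks_genuine(au6_intensity, au12_intensity, threshold=0.5):
    """Treat a smile as genuine only if the eye muscles fire with the mouth."""
    return au6_intensity > threshold and au12_intensity > threshold

print(smile_looks_genuine(au6_intensity=0.8, au12_intensity=0.9))  # True
print(smile_looks_genuine(au6_intensity=0.1, au12_intensity=0.9))  # False: mouth only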

One practical challenge that I see to the implementation of such a system is that this kind of interaction (online with some sort of virtual agent) is already going to be a bit unnatural, and this may lead people to act differently and not exhibit the kinds of behavioural patterns that are characteristic of normal interactions in which people are comfortable and not trying to hide anything.

It will of course also be important that people have good webcams and stable internet connections, in order to minimise false positives arising from technical error.

 

