24 Jul 2017

Maintaining Privacy in the Workplace

StatusToday, a startup based in London, has developed an AI platform to monitor employees in the workplace, tracking individuals’ behavioural patterns.


Designed to work out the routines of employees, it can then flag up anomalous behaviours – such as excessive copying of files not directly related to their work – so that the employer can see whether the behaviour poses a security risk. For it to work, the platform has to monitor everyone at a company, and it also has the capacity to be used for evaluating employee productivity at all times.
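
To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of baseline-and-deviation check. It is an assumption about how such a system might work, not a description of StatusToday's actual platform: the hypothetical function `is_anomalous_today` takes a log of one employee's past daily file-copy counts and flags a day whose count deviates sharply from that employee's own routine.

```python
# Illustrative sketch only: StatusToday's real methods are not described here.
# Assumes a per-employee history of daily file-copy counts; today is flagged
# when its count sits far above that employee's own historical baseline.
from statistics import mean, stdev

def is_anomalous_today(history, today_count, threshold=3.0):
    """Return True if today's copy count is more than `threshold`
    standard deviations above the employee's historical baseline."""
    if len(history) < 2:
        return False  # not enough history to establish a routine
    mu, sigma = mean(history), stdev(history)
    return bool(sigma) and (today_count - mu) / sigma > threshold

# A sudden burst of copying stands out against an otherwise steady routine.
print(is_anomalous_today([12, 9, 11, 10, 14], 180))  # -> True
print(is_anomalous_today([12, 9, 11, 10, 14], 13))   # -> False
```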

Dr. Paul Bernal, of the University of East Anglia, highlights concerns over both privacy and consent with the use of this type of technology, even suggesting that knowledge of the omnipresent AI may change employees’ behaviour and end up being counterproductive.

Dr. Paul Bernal was speaking to Robotics Law Journal's Tom Dent-Spargo.


 

Is there anything related to AI monitoring use that should be codified by a third party (either the government or an independent organisation)?

To an extent, AI monitoring is “codified” in that it’s governed by both data protection and surveillance law.

Data protection law means that there has to be a lawful purpose for which data is gathered, and a legal basis for that gathering. Checking on security risks would be lawful, but the legal basis would be complex. Generally it would be through consent, and for “sensitive” personal data (things like sexuality, political views, health, religion, trade-union membership) that consent has to be explicit and “informed”. Web browsing data, for example, would almost certainly include sensitive personal data (it doesn’t have to be direct – if you can derive sensitive data from the data you’re gathering, that would probably be enough). Employers might seek to gain that consent by having a term in employment contracts that says you should consent to be monitored, but whether that would be sufficient is not really clear. 

Surveillance law (the old Regulation of Investigatory Powers Act (RIPA) and the new Investigatory Powers Act (the IP Act)) places limits on who can put surveillance on whom – and unauthorised surveillance can be a criminal offence. I don’t know of any convictions for employer-employee surveillance, though I certainly don’t have an exhaustive knowledge of the subject.

The key body involved here is the Information Commissioner’s Office (ICO): they’ve provided “codes of practice” for CCTV use, for example. See https://ico.org.uk/for-organisations/guide-to-data-protection/employment/  

 

Should companies or an independent organisation provide an internal code of conduct for the use of these AI systems?

It would certainly be good practice to do so – and I suspect it will become a legal obligation once these systems become more common, and once the law comes to grips with it. A forward-looking company should be thinking about this now. I suspect that the ICO will come up with specific guidance on AI monitoring at some point.

 

If employees are given a clear choice and refuse to be monitored, should they fear for their jobs?

Theoretically, no; in practice, absolutely! Employers are unlikely to be aware of their employees’ rights in this area, and are entirely likely to be prejudiced against employees who refuse to be monitored. A specialist employment lawyer would know more about this.

 

What measures do you think should be taken to protect jobs in this matter?

Personally, I think the main thing is to establish as general practice that monitoring should be strictly limited to what is really necessary. That’s what data protection law says anyway, but it is often less strictly complied with than it ought to be. Getting official guidance from the ICO would really help, and so would working with the relevant trade unions to ensure they’re aware of the issues for their members.

 

Can employees truly consent to the use of this system? It might be seen as "coming with the job".

That’s a really good question, and in practice I suspect the employers at least will see it that way. In some fields, it really does come with the job – financial services, for example, or nuclear power. In others, it really shouldn’t. 

 

Is the issue of privacy worth more or less than catching the security risks that this system is designed for?

That’s a balancing exercise that depends on the particular circumstances. Security doesn’t “trump” privacy in general – but it can in specific circumstances. A proper risk assessment needs to be done, looking at all the different aspects of the situation.

 

Are there any other risks beyond privacy related to using these systems?

Privacy isn’t a separate, specific issue, but one that underpins a whole lot of other issues. Privacy protects a vast number of other rights and freedoms: without privacy you can’t have freedom of expression, freedom of association and assembly, freedom of religion, access to information, the right to join a trade union and so forth. This isn’t just theoretical – it has been empirically demonstrated that when they are under surveillance, people are less likely to seek out information, less likely to speak out, less likely to join organisations and so forth. Whistleblowers are “chilled” too. 

There are other risks too – risks associated with systems being misused, risks of false positives, risks of errors with damaging consequences and so forth. Systems can be hacked too. And they can be very expensive, so it can be a waste of crucial resources.

 

Do we need to act now to ensure the right to privacy is protected moving forwards as these systems become more widespread?

I think we need to be talking and thinking about these things, from all the perspectives noted above. Getting the ICO involved at an early stage would be the first and most important step (in the UK).

 

