03 Feb 2021

Nearly half of businesses not vetting tech for bias

Poor management of data bias could leave businesses vulnerable to ‘immense reputational and financial damage’, says Hogan Lovells report

By Madeline Anderson

A report by Hogan Lovells highlights the potential for data bias to damage a business, and sets out what can be done to better manage the risks.

The report, How to prevail when technology fails, surveyed businesses from a range of sectors with annual revenues ranging from $200m to more than $1bn. Respondents included GCs, heads of legal, chief information security officers, COOs and CEOs.

The centrality of technology was clear: 61 percent of businesses surveyed agreed that developing and/or deploying technology was a core part of their growth strategy. Yet despite bias in data and programming ranking among respondents' major ethical concerns, 45 percent do not check technology supplied to them for racial and gender bias.

The report cites the discrimination that can arise when algorithms and AI, which reflect the biases of their creators, are used to review CVs in the recruitment process. Algorithms underpinning the software might employ biased logic that discriminates against people with certain names or who live in particular areas.
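To illustrate the kind of check involved, here is a minimal Python sketch of a matched-pair audit. The scoring function score_cv is a hypothetical stand-in for a vendor's screening model, not a real API; the idea is to score CVs that are identical except for the candidate's name and look for unexplained differences.

    def score_cv(cv: dict) -> float:
        """Hypothetical stand-in for a vendor's CV-scoring model."""
        raise NotImplementedError("replace with a call to the real screening tool")

    def name_swap_audit(base_cv: dict, names: list[str]) -> dict[str, float]:
        """Score otherwise-identical CVs that differ only in the name field.

        With qualifications held constant, any meaningful spread in scores
        suggests the model is reacting to the name itself, a possible proxy
        for race or gender."""
        scores = {}
        for name in names:
            candidate = {**base_cv, "name": name}  # copy the CV, vary only the name
            scores[name] = score_cv(candidate)
        return scores

The same approach works for any field, such as a postcode, that could act as a proxy for a protected characteristic.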

“The use of AI in employee hiring decisions or in the provision of access to goods and services creates significant legal risk,” Hogan Lovells partner Stefan Martin said.

If a business opts to purchase a technology rather than develop it internally, it may not be clear whether the technology contains biases, and any biases may only become apparent once the tech has already been deployed.

“Businesses need to be alive to the risks that exist in this area,” Martin added. “They also need to take active steps to assess those risks and to protect themselves from them when technology fails or is shown to be biased in its use or application.”

The study also cites a lack of representative data as a stumbling block for businesses using tech-enabled products. This issue is already well documented in the US, notably in facial recognition software and wearable health monitors that perform less accurately for people with darker skin.

Failing to address this internally can lead to an ‘inferior product’ and can negatively influence the development of other similar products, the report says.

Valerie Kenyon, a partner at Hogan Lovells specialising in product liability and product law risks, highlighted the blurred line between a product and the decisions it makes once it begins to evolve on its own rather than through human intervention.

“Algorithms can adopt unwanted biases found within the data on which they are trained,” Kenyon said. “Companies looking to innovate must carefully consider each type of technology they use and work closely with their legal and privacy teams during the entire lifecycle of a product.”

The report implores firms to prioritise ethical challenges by establishing their own internal ethical principles and involving the entire business, rather than leaving it to those at a management level. Businesses should also hold outside technology providers to a stringent ethical standard that matches their own and, at the very least, should seek warranties that software does not contain biases and conduct due diligence to verify this.
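As a minimal sketch of what that due diligence might look like in practice, the Python below applies the US EEOC's ‘four-fifths’ rule of thumb for adverse impact to hiring outcomes broken down by group. The group labels and numbers are illustrative assumptions, not figures from the report.

    def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
        """outcomes maps each group to (number selected, number of applicants).

        Returns the lowest group selection rate divided by the highest; under
        the US EEOC 'four-fifths' rule of thumb, a ratio below 0.8 signals
        possible adverse impact and warrants investigation."""
        rates = [selected / applicants for selected, applicants in outcomes.values()]
        return min(rates) / max(rates)

    # Illustrative figures only, not taken from the report.
    outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
    print(disparate_impact_ratio(outcomes))  # 0.625, below the 0.8 threshold

A check like this is no substitute for legal advice, but it is the kind of simple, repeatable test a buyer can run before and after deployment.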

 

