16 Mar 2021

Lawsuit claims Clearview AI violates Californians’ privacy

The controversial facial recognition software developer is being sued by activists and immigrant rights groups

By Madeline Anderson


Clearview AI, the controversial facial recognition software company whose technology is used by law enforcement agencies across the US, is being sued by community-based organisations and political activists in California. 

The complaint, filed in Alameda County Superior Court last week, is part of a wider effort to restrict the use of facial recognition technology in the state.  

The plaintiffs, Hispanic social network Mijente, campaign group NorCal Resist and four individual political activists, are being represented by San Francisco boutique law firm BraunHagey & Borden and Washington DC-based immigration law specialist Just Futures Law.  

The complaint asserts that New York-based Clearview has violated the privacy rights of the plaintiffs and all California residents in building “the most dangerous facial recognition database in the nation”. It claims the database was assembled by “illicitly” collecting more than three billion photos of “unsuspecting individuals”, gathered by “scraping” the images from websites including Facebook, Twitter and Venmo. Clearview then uses algorithms to “extract the unique facial geometry of each individual depicted in the images, creating a purported ‘faceprint’ that serves as a key for recognising that individual in other images.” 
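To make the complaint’s description concrete, the sketch below illustrates the general idea of a “faceprint” used as a lookup key: a numeric embedding derived from a face image is compared against stored embeddings by similarity search, and the closest match above a threshold identifies the person. This is a minimal, hypothetical illustration of how such systems typically work, not Clearview’s actual pipeline; the `match` function, the toy embeddings and the threshold are all invented for the example.

```python
# Conceptual sketch of a "faceprint" lookup -- NOT Clearview's actual system.
# In a real pipeline the embeddings would come from a trained face-recognition
# model; here they are small hand-written vectors so the example runs as-is.
import numpy as np

def match(query, database, threshold=0.9):
    """Return the identity whose stored faceprint is most similar to the query."""
    best_name, best_score = None, -1.0
    for name, print_vec in database.items():
        # Cosine similarity between the query embedding and a stored faceprint.
        score = float(np.dot(query, print_vec) /
                      (np.linalg.norm(query) * np.linalg.norm(print_vec)))
        if score > best_score:
            best_name, best_score = name, score
    # Only report a match if it clears the similarity threshold.
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy "database" of faceprints (hypothetical identities and vectors).
db = {
    "person_a": np.array([0.9, 0.1, 0.4]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}
query_embedding = np.array([0.88, 0.12, 0.41])  # embedding of a new photo
print(match(query_embedding, db))  # -> ('person_a', ~0.9997)
```

The point of the sketch is simply that, once an embedding exists for a face, any new photo of that face can be resolved back to the same database entry, which is why the complaint treats the faceprint as a “key” to an individual’s identity.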

Clearview persists despite having received “multiple” requests to stop this practice, which the complaint says violates many of the websites’ terms of service and the contracts between the sites and their users. 

It also claims Clearview has given “thousands” of government agencies and private entities access to its database, which the complaint says is “almost seven times the size of the FBI’s.” 

Sejal Zota, legal director of Just Futures Law and lead attorney in the case, said Clearview “upends” Californians’ constitutional right to live without “fear of surveillance and monitoring”, adding that there can be “no meaningful privacy” as long as Clearview continues its operations.  

“Allowing Clearview to continue building its illicit surveillance database would be the end of privacy as we know it,” added BraunHagey partner Ellen Leonida, remarking that the company’s technology “gives governments and corporations unprecedented abilities to spy on us wherever we go.”  

The suit seeks an injunction barring Clearview from collecting further biometric information in California and requiring it to delete all data on the state’s residents.  

“The scope of Clearview’s reach alone should terrify,” said senior Mijente campaign organiser Jacinta Gonzalez, adding that the mechanics of Clearview’s technology mean a single photo can expose an individual’s entire digital presence, even if the person identified was merely in the background.  

“This is going to be used to surveil us, arrest us, and in some cases deport us,” she said.  

Facial recognition has been at the centre of data privacy debates in recent years over its mass surveillance capabilities and alleged racial bias, with studies showing the technology misidentifies African-American and Asian faces at higher rates than white faces. In some cases this has led to mistaken arrests, heightening public concern over the proliferation of biometric software among law enforcement agencies. States and tech companies alike have scrambled to respond, with Bay Area cities leading the charge in 2019 as some of the first in the US to restrict the use of facial recognition technology by local law enforcement. 

The complaint highlights Clearview’s “ties to alt-right and white supremacist organisations”, claiming “its mass surveillance technology disproportionately harms immigrants and communities of color.” 

Clearview CEO Hoan Ton-That maintains the company’s systems are free of bias.  

“An independent study has indicated that Clearview AI has no racial bias,” he said in a statement.  

One of Clearview’s legal representatives, eminent First Amendment attorney Floyd Abrams, said the company complies with “all applicable law” and that its activities are fully protected by the First Amendment, which he argues covers a company’s right to create and disseminate information.  

In the past, Clearview has also sought legal counsel from Tor Ekeland, a lawyer known for representing hackers, as well as Jenner & Block partner Lee Wolosky, who has served under the last three US presidents in national security positions.  

Since the New York Times revealed the existence of Clearview in January last year, the company has faced a series of similar lawsuits in other jurisdictions. The American Civil Liberties Union recently sued it in Cook County, Illinois, arguing that its “scraping” practices violate the state’s Biometric Information Privacy Act, the law that was recently the basis of a $650m settlement over Facebook’s facial recognition photo-tagging feature. Clearview has also been hit with a lawsuit in Vermont state court, where a statute prevents corporate use of faceprints without explicit consent.  

But these legal challenges stretch far beyond US borders. Clearview has faced international scrutiny, particularly in the EU, where regulators have said the company’s data processing violates the GDPR. In Canada, Clearview stopped operating during an investigation by privacy commissioner Daniel Therrien, who deemed its practices illegal, but it has yet to meet his demand that it remove Canadians from its database.  

The reason Clearview’s business model is so hotly contested may lie in its willingness to go where even Silicon Valley’s biggest players have refused to tread. In 2011, Google said it was holding back from building a similar facial recognition tool because it could be used “in a very bad way”. Microsoft has likewise declined to sell facial recognition technology to US police forces over concerns about racial bias.  

In the UK, Metropolitan Police Commissioner Cressida Dick defended the use of facial recognition services in February last year, saying it is not up to police to determine the boundary between security and privacy.