Clearview AI controversy highlights rise of high-tech surveillance
Facial recognition software developed by tech company Clearview AI has sparked a global backlash. At the DW Global Media Forum, the company's CEO defended the technology. But regulators have zeroed in on his firm.
![](https://static.wixstatic.com/media/b1125b_1db73dcaa1e54f2f8a4e08258b7d344c~mv2.jpg/v1/fill/w_980,h_653,al_c,q_85,usm_0.66_1.00_0.01,enc_auto/b1125b_1db73dcaa1e54f2f8a4e08258b7d344c~mv2.jpg)
Don't want your face to appear in Clearview AI's database?
The company's CEO doesn't seem to care.
"All the information we collect is collected legally and it is all publicly available information," Hoan Ton-That said Monday during DW's Global Media Forum (GMF), addressing criticism that the firm's controversial technology infringes on the privacy of hundreds of millions.
Privacy activists recently lodged data protection complaints against Clearview AI in five European countries. They argue that the software — a search engine for faces combing through billions of photos — violates the UK’s and the EU's strict privacy rules.
The controversy highlights how, as artificial intelligence technology matures, it could give rise to surveillance on an unprecedented scale.
Amos Toh, a senior researcher at NGO Human Rights Watch, warned during the GMF that governments and companies increasingly deploy facial recognition to spy on their citizens and customers. The technology uses AI to identify individuals in images.
"We have seen facial recognition being used in Russia to detain peaceful protesters, we have seen facial recognition being used on children in Argentina," Toh told the conference, which was held mostly online due to the pandemic.
He added that facial recognition technology is often inaccurate and prone to discriminate against minorities.
"And even if it is accurate, there is also immense potential for … human rights abuses," he said.
How Clearview AI surveillance works
Hundreds of companies around the world are working on facial recognition software. Analysts estimate that the global market was worth around $10 billion (€8.25 billion) last year.
And yet no other firm has sparked as much backlash as Clearview AI.
The firm's technology is based on a biometric database of billions of photos scraped from websites including Facebook, Twitter and Instagram.
Once paying customers upload a photo, the program spits out all other images it has of the person, plus information on who he or she likely is.

Law enforcement agencies have defended using the software as a tool to identify victims and perpetrators of child abuse or to fight terrorism. US police have deployed it, for instance, to identify some of the rioters involved in the storming of the US Capitol on January 6.
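Clearview AI's actual pipeline is proprietary, but face search engines of this kind generally work by converting each photo into a numeric "embedding" vector and returning the stored photos whose vectors are most similar to the query. The sketch below illustrates that retrieval step with made-up photo IDs and toy four-dimensional vectors; real systems use embeddings with hundreds of dimensions produced by a neural network.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search_faces(query_embedding, database, top_k=3):
    """Rank stored photos by similarity to the query face embedding."""
    scored = [(photo_id, cosine_similarity(query_embedding, emb))
              for photo_id, emb in database.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Hypothetical database: photo IDs mapped to invented embedding vectors.
database = {
    "photo_A": [0.9, 0.1, 0.0, 0.2],
    "photo_B": [0.1, 0.8, 0.3, 0.0],
    "photo_C": [0.85, 0.15, 0.05, 0.25],
}

# Embedding of the uploaded photo; photo_A and photo_C are near-matches.
query = [0.88, 0.12, 0.02, 0.22]
for photo_id, score in search_faces(query, database):
    print(photo_id, round(score, 3))
```

At the scale the article describes — billions of photos — such lookups use approximate nearest-neighbor indexes rather than the exhaustive scan shown here, but the matching principle is the same.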
CEO Ton-That said during the GMF that his company, which has reportedly shut down all the accounts it had with private companies, "only sells to law enforcement at this time."
He added that "thousands and thousands of crimes have been solved that otherwise wouldn't have been."
But Human Rights Watch's Amos Toh countered that "the potential for abuse in many cases outweighs any purported benefits."
"We know that facial recognition certainly has been very effective in facilitating human rights abuses here [in the US] and abroad," he told the conference.