We also don’t know which police departments have facial recognition technology, because it’s common for police to obscure their procurement process. There is evidence, for example, that many departments buy their technology using federal grants or nonprofit gifts, which are exempt from certain disclosure laws. In other cases, companies offer police trial periods for their software, allowing officers to use the systems without any official approval or oversight. This lets companies that make face recognition systems claim their products are in wide use, and gives the outward impression that they’re both popular and reliable crime-solving tools.
Protected algorithms that don’t serve
But if facial recognition is known for anything, it’s how unreliable it is. As we report in the show, in January London’s Metropolitan Police debuted a live facial recognition system that in tests had an accuracy rate of less than 20%. In New York City, the Metropolitan Transportation Authority trialed a system on major thoroughfares with a reported accuracy rate of 0%. The systems are often racially biased as well: one study found that in some commercial systems, even under lab conditions, error rates in identifying darker-skinned women were around 35%. While reporting for the show, we found that it’s not uncommon for police to alter photos to improve their chances of finding a match. Some even defended the practice as critical to doing good police work.
Two of the most controversial and advanced companies in the field, Clearview AI and NtechLab, claim to have solved the “bias problem” and reached near-perfect accuracy. Clearview AI asserts that its product is used by around 600 police departments in the US (some experts we spoke to were skeptical of that figure). NtechLab, based in Russia, has signed on to provide live video facial recognition throughout the city of Moscow.
But there is almost no way to independently verify these claims. Both companies’ algorithms sit on databases of billions of public photos. The National Institute of Standards and Technology (NIST), meanwhile, offers one of the few independent audits of face recognition technology: the Face Recognition Vendor Test. But the test uses a much smaller dataset, which, along with the quality and diversity of its images, limits its power as an auditing tool. Clearview AI has not taken NIST’s most recent test. NtechLab has taken the static-image test and performed well, but there is currently no test for live video facial recognition. There is also no independent test specifically for bias.
Recognition in the streets
The recent wave of Black Lives Matter protests, sparked by George Floyd’s death, has called into question much of what we’ve accepted about modern policing, including police use of technology. The dark irony is that when people take to the streets to protest racism in policing, some police have used cutting-edge tools with a known racial bias against those assembled. We know, for example, that the Baltimore police department used face recognition on protesters after the death of Freddie Gray in 2015. And we know that a handful of departments have put out public calls for footage of this year’s protests. It’s been documented that police in Minneapolis have access to a range of tech, including Clearview AI’s services. According to Jameson Spivack of the Center on Privacy and Technology at Georgetown University, whom we interview in the show, if face recognition is used on BLM protests, it’s “targeting and discouraging Black political speech specifically.”
After years of struggle for regulation, driven mostly by Black- and brown-led organizations, there has never been a better moment for real change. Microsoft, Amazon and IBM have all announced discontinuations of or moratoriums on their face recognition products. In the past several months, a handful of major US cities have announced bans or moratoriums on the technology. On the other hand, the technology is moving rapidly. The systems’ capabilities, as well as their potential for misuse and abuse, will continue to grow by leaps and bounds. We’re already starting to see police departments and technology providers move beyond static, retrospective face recognition toward live video analytics integrated with other data streams, like audio gunshot-detection systems.
Some of the police officers we spoke to said they shouldn’t be left with archaic tools to fight crime in the 21st century. And it’s true that in some cases, technology can make policing less violent and less prone to human biases.
But after months of reporting out our audio miniseries, I was left with a feeling of foreboding. The stakes are growing by the day, and so far the public has been left far behind in its understanding of what’s going on. It’s not clear how that will change unless people on all sides of this issue can agree that everyone has a right to be informed.