Protesters march on a street during a protest against police brutality and the death of George Floyd on May 31, 2020 in Miami, Florida.
Law enforcement agencies in several cities, including New York and Miami, have reportedly used controversial facial recognition software to track down and arrest people allegedly involved in criminal activity during the Black Lives Matter protests, in some cases months after the fact.
Miami police used Clearview AI to identify and arrest a woman who allegedly threw a rock at a police officer during a protest in May, local NBC affiliate WTVJ reported this week. The agency has a policy against using facial recognition technology to monitor people engaged in "constitutionally protected activities" such as protests, the report said.
"If someone protests peacefully and does not commit a crime, we cannot use it against them," Armando Aguilar, Miami's deputy police chief, told NBC6. But Aguilar added, "We have used the technology to identify violent protesters who attacked police officers, damaged police property, and set property on fire. We have made multiple arrests in these cases, and more arrests will be made in the near future."
A lawyer representing the woman said he had no idea how police identified his client until he was contacted by reporters. "We don't know where they got the picture from," he told NBC6. "How and where they obtained her image raises other privacy questions: Did they search her social media? How did they get access to her social media?"
Similar reports have surfaced from across the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county also used facial recognition, albeit from a different provider, to retroactively identify and arrest several protesters, according to local newspaper The State. Philadelphia investigators likewise used third-party facial recognition software to identify protesters from photos posted to Instagram, The Philadelphia Inquirer reported.
New York Mayor Bill de Blasio promised Monday that the NYPD "would be very careful and very limited in using anything that involves facial recognition," Gothamist reported. That statement followed an incident earlier this month in which "dozens of NYPD officers – accompanied by police dogs, drones, and helicopters" descended on the home of a Manhattan activist who had been identified by an "artificial intelligence tool" as a person who allegedly used a megaphone to shout into an officer's ear during a protest in June.
The ongoing nationwide protests, aimed at drawing attention to systemic racial disparities in policing, have brought increased scrutiny to police use of facial recognition systems in general.
Repeated tests and studies have shown that most of the facial recognition algorithms in use today are significantly more likely to generate false positives or other errors when trying to match images of people of color. Late last year, the National Institute of Standards and Technology (NIST) published research finding that the facial recognition systems it tested were most accurate at identifying white men, but were 10 to 100 times more likely to make mistakes when attempting to identify Black, Asian, or Native American faces.
There's one more wrinkle, particular to 2020, when it comes to matching photos of civil rights activists: NIST found in July that most facial recognition algorithms perform significantly worse when matching masked faces. A significant share of the millions of people who turned out for marches, rallies, and demonstrations across the country this summer wore masks to reduce the risk of COVID-19 transmission in large crowds.
The ACLU filed a complaint against the Detroit Police Department in June alleging that the department arrested the wrong man based on a flawed, incomplete match generated by facial recognition software. Following the ACLU's complaint, Detroit police chief James Craig admitted that the software his agency used misidentifies suspects 96 percent of the time.
IBM exited the facial recognition business in June. The company also called on Congress to pass laws requiring vendors and users to test their systems for racial bias and to have such tests audited and reported. Amazon echoed the call for Congress to pass a law, imposing a one-year moratorium on police use of its Rekognition product in the hope that Congress will act by next summer.
Clearview in particular – the vendor used in Miami – is hugely controversial for reasons beyond the potential for bias. A January report in The New York Times found that the secretive startup had essentially scoured the entire internet for images to fill its database of faces. Facebook, YouTube, Twitter, Microsoft, and other companies sent Clearview cease-and-desist demands within days of the report's publication. Nonetheless, the company still has roughly 3 billion images available for its partners (mostly, but not exclusively, law enforcement agencies) to match images of people against.
The company is facing multiple lawsuits, including suits brought by states and the ACLU, as well as by individuals seeking class-action status.