People protest against the police at a location they call the City Hall Autonomous Zone, in support of Black Lives Matter, in Manhattan, New York City, on 30, 2020.
Carlo Allegri | Reuters
Last June, three of the biggest names in facial recognition technology imposed moratoriums on sales of the technology to police after pressure from civil rights activists and nationwide protests sparked by the murder of George Floyd.
But after a year of public debates on the state of the police force in America, there has been almost no progress in regulating facial recognition.
That means companies like Amazon and Microsoft, which issued moratoriums to give Congress time to set rules of the road, have remained in limbo. IBM, meanwhile, announced it would exit the business entirely.
In the year since these tech companies paused facial recognition sales, lawmakers have continued to wrestle with how to properly regulate the technology at the state and federal levels. A coalition of Democrats has urged the government to halt use of the technology entirely until better rules are in place. So far, most of the action has taken place in a handful of states.
Privacy and civil liberties advocates say they see corporate moratoriums as a promising first step, but they also remain cautious about other worrying forms of surveillance that technology companies continue to benefit from.
And while Amazon and others restricted sales of their facial recognition technology, police appear to have used similar tools during last summer's widespread protests against police brutality, even as law enforcement agencies declined to disclose that use.
The unique challenge of facial recognition
Facial recognition poses unique risks for citizens, privacy advocates say, even compared with in-person police surveillance.
"With most digital surveillance, the difference is not that these types of analog activities were more judicially supervised; it's the cost," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP). While covertly tracking someone in person takes a huge investment of time and money, it is cheap and quick to create fake social media pages to keep an eye on people, Cahn said.
Matt Mahmoudi, researcher and advisor on artificial intelligence and human rights at Amnesty International, said another problem lies in the way in which facial recognition can be used without the subject's knowledge.
"In a normal police line-up, you are aware that you are standing in a row," said Mahmoudi. "With facial recognition, you have no idea that you are in a virtual line-up. You can find yourself in one at any time."
The sense that facial recognition could be deployed at any time, combined with the lack of transparency about how law enforcement agencies use the technology, could chill speech and free expression, activists fear.
Face detection grid
Steger photo | Peter Arnold | Getty Images
The potential threat from such tools is particularly acute for Black and brown people. Facial recognition tools have historically been less accurate at identifying them, in part because the algorithms tend to be trained on datasets that skew white and male.
Research has shown that facial recognition software can contain racial and gender biases. In 2018, MIT computer scientist Joy Buolamwini and renowned AI researcher Timnit Gebru co-authored a groundbreaking paper showing that facial recognition systems from IBM and Microsoft were significantly worse at identifying people with darker skin.
In addition, studies by the American Civil Liberties Union and M.I.T. found that Amazon's Rekognition technology was more likely to misidentify women and people of color than white men.
Proponents of facial recognition technology, including Amazon, have argued that it can help law enforcement agencies track down suspected criminals and reunite missing children with their families. Amazon has also disputed the ACLU and M.I.T. studies, arguing that the researchers used Rekognition differently from how it recommends law enforcement agencies use the software.
Rep. Bobby Rush, D-Ill., himself an activist who joined the Student Nonviolent Coordinating Committee during the 1960s civil rights movement and co-founded the Illinois chapter of the Black Panther Party, has raised concerns about the technology's bias and supports a federal moratorium on its use.
"There has been a generations-long, I think you'd call it a trope, in the Black community that all Black people look the same," Rush said in an interview with CNBC. "With the advent of facial recognition technology, this trope became a reality."
Tech companies are still monetizing surveillance
Amazon, Microsoft and IBM have significantly restricted sales of facial recognition tools to police, but law enforcement still has a plethora of surveillance tools at its disposal.
Microsoft has played a major role in police surveillance beyond facial recognition. The company developed the Domain Awareness System in collaboration with the New York Police Department, according to the department's website. The system is billed as a "crime and counter-terrorism tool" that uses "the world's largest networks of cameras, license plate readers and radiological sensors." Microsoft declined to comment on the DAS or provide additional information.
Ring, Amazon's smart home security subsidiary, has also drawn intense scrutiny from privacy advocates over its rapidly growing cooperation with police. Since 2018, Ring has formed more than 2,100 partnerships with police and fire departments, giving them access to video footage captured by its users' internet-connected cameras. Video clips are requested through Ring's social media-like community safety app, Neighbors, which lets users upload and comment on recorded footage and discuss what's going on around them.
Ring does not disclose sales figures for its products, but in a letter to lawmakers last January it stated, "There are millions of customers who have bought a Ring device."
As Ring's police partnerships have grown, privacy advocates have expressed concern that the program and Ring's companion Neighbors app turn residents into informants while giving police access to footage with no warrant and few guardrails on how they can use the material.
Ring has argued that it creates "safer, more connected communities." Amazon claimed in 2018 that Ring's video doorbell reduced neighborhood break-ins by as much as 55%, although recent reporting by NBC News and CNET found little evidence to support that claim.
Ring's partnerships with public safety agencies have only grown in the year since Amazon suspended sales of Rekognition to police. The company has announced 468 new partnerships with law enforcement agencies since June 10, 2020, public Ring records show.
In the latest sign of how much the program has expanded, all 50 U.S. states now have police or fire departments participating in Amazon's Ring network, according to data from the company's active agency map.
Following Amazon's Rekognition moratorium and amid global protests against police violence, civil rights and human rights groups seized the moment to urge Ring to end its police partnerships. At the time, the Electronic Frontier Foundation argued that Amazon's expressions of solidarity with the Black community rang hollow as long as Ring works with police, providing tools that advocates fear will heighten racial profiling of minorities.
Ring told CNBC in a statement that the company will not tolerate racial profiling and hate speech in content shared from Ring devices and on the Neighbors app.
Privacy advocates who spoke to CNBC said they believe Ring doorbells and Rekognition raise similar concerns, as both products contribute to an expanding network of police surveillance.
"(Amazon is) working hard to monetize surveillance technology and cozy up to law enforcement agencies to make it profitable for itself," said Nathan Freed Wessler, a senior attorney for the ACLU's Speech, Privacy and Technology Project. "Ring is less of a concern than facial recognition in some ways, but it is really worrying because it basically places tiny surveillance cameras in residential neighborhoods across the country and provides a very efficient way for police to get access to that footage, giving law enforcement agencies a huge trove of video of people living their lives that they never had access to before."
Police need users' consent to access Ring footage. That process became more transparent with an update Ring rolled out last week, under which police and fire departments must request video footage from users via public posts in the Neighbors app. Previously, agencies could privately email users to request videos. Users can also opt out of seeing public safety posts in the Neighbors app.
Ring says the footage can be a valuable tool in helping police investigate crimes such as package theft, break-ins and trespassing. Advocates and lawmakers fear, however, that Ring devices will lead to increased surveillance and racial profiling.
In February, the Electronic Frontier Foundation received emails from Los Angeles police showing the department requested access to Ring footage during protests against Black Lives Matter last summer. The EFF called it "the first documented evidence that a police agency specifically requested footage from networked home surveillance devices in connection with last summer's political activity".
"The LAPD's 'Safe L.A. Task Force' seeks your help," LAPD Detective Gerry Chamberlain said in an email. "During the recent protests, people were injured and property was looted, damaged and destroyed. In order to identify those responsible, we ask that you submit copies of any videos you may have (redacted) for."
Ring said its policies prohibit public safety agencies from submitting video requests related to protests and other lawful activities. The company added that every police request for video on the Neighbors app must include a valid case number for an active investigation, as well as details about the incident.
Privacy and civil liberties advocates fear not only that home surveillance devices like Ring could lead to increased surveillance of protesters, but also that Ring footage could be used in conjunction with other technologies, such as facial recognition, to help police identify people quickly and easily.
Law enforcement agencies are not prohibited from distributing Ring footage to anyone. Amazon told lawmakers in 2019 that police who download Ring footage can keep the videos forever and share them with anyone, even if the video contains no evidence of a crime, the Washington Post reported.
"Once the police get this footage, if they are in one of the many cities that haven't yet banned facial recognition, they can take Ring footage and then use another company's facial recognition system to identify a person or anyone else who walks by," Wessler said. "There's nothing technologically stopping them from running every face through the system to try to identify people."
For its part, Ring said last August that it does not use facial recognition technology in its devices or services and would not sell or offer the technology to law enforcement agencies.
Facial recognition and protests
Last summer, privacy advocates warned of the dystopian ways in which racial justice protesters could be tracked and identified by police. Articles on how to disguise faces with makeup and masks, and how to keep smartphones from broadcasting detailed location information, circulated in progressive circles.
A year later, there have been only a handful of reports about how facial recognition and other surveillance technologies may have been used on protesters. But activists say the information released about protest surveillance barely scratches the surface of law enforcement's capabilities, and that's part of the problem.
In many cases, law enforcement agencies are not required to disclose how they monitor citizens. It was only last June, amid the protests, that New York lawmakers passed a law requiring police to publicly disclose how they use surveillance technology. In the course of a lawsuit over the NYPD's failure to disclose its use of facial recognition, STOP found that the department's facial recognition unit had handled more than 22,000 cases over three years, though little else was known.
"It was like being in the dark for a bit," said Amnesty International's Mahmoudi.
In a high-profile case last summer, the NYPD appeared to use facial recognition to track down Black Lives Matter protester Derrick "Dwreck" Ingram in an attempted arrest that led to an hours-long standoff after Ingram refused to let officers enter his home without a warrant. Ingram live-streamed the ordeal on social media as dozens of officers reportedly lined his block and a police helicopter hovered overhead. Police eventually left, and he turned himself in the next day.
In a statement to CNBC, an NYPD spokesperson said police were responding to an open complaint alleging that Ingram had assaulted an officer almost two months earlier during a demonstration by yelling through a megaphone into the officer's ear. Ingram has denied the assault allegation, and the charges were ultimately dismissed.
Ingram said he was "stunned" and "shocked" when he learned that facial recognition tools appeared to be involved in his investigation. NYPD spokesperson Sgt. Jessica McRorie declined to comment on whether the tools were used in his case, but said the NYPD "uses facial recognition as a limited investigative tool" and that a match alone would not establish probable cause for an arrest.
Ingram's surprise stemmed in part from his familiarity with surveillance tools: he had hosted sessions teaching other activists how to protect themselves from surveillance by using encrypted apps, making their social media pages private and other strategies. Even so, he didn't think he would be tracked that way.
Now, when he educates other activists about surveillance, he makes clear that protesters like him can still be tracked if law enforcement wants to.
"If the government, if the police want to monitor us, they will monitor you," he said. "My counter-argument is that we should use the same tools to prove the damage it is causing. We should do the research, we should fight the legislation, and really tell stories like mine to make public what is happening and really expose the system for what it is, a scam, and how dangerous it really is."
In the nation's capital, law enforcement officials revealed in court documents that they used facial recognition tools to identify a protester charged with assault. At the time, the officer who ran the area's facial recognition program told the Washington Post that the tool was not used on peaceful protesters and only generated leads. A new Virginia law restricting the use of facial recognition by local law enforcement will soon shut down the system, the Post later reported; the pilot program operated across Maryland, Virginia and Washington, D.C., and required buy-in from each jurisdiction.
Rep. Anna Eshoo, D-Calif., sought to learn more about how the federal government used surveillance tools during last summer's racial justice protests and urged agencies to restrict their use of such tools, but said she was underwhelmed by the agencies' responses at the time.
"I got high-level responses, but very little detail," Eshoo said in an interview with CNBC. "What remains are many unanswered questions."
Officials from the agencies Eshoo wrote to (the Federal Bureau of Investigation, Drug Enforcement Administration, National Guard, and Customs and Border Protection) either did not respond or declined to comment on their responses or on their use of facial recognition tools during protests.
Curbing facial recognition technology
Momentum for facial recognition legislation has ebbed and flowed over the past year and a half. Before the pandemic, several privacy advocates told CNBC they had seen progress on such regulations.
But the public health crisis reset priorities and may even have shifted some lawmakers' and citizens' attitudes toward surveillance technologies. Soon, government agencies were discussing how to implement contact tracing on Americans' smartphones, and widespread mask-wearing allayed some concerns about technology that could identify faces.
The social movement that followed the police murder of Floyd renewed fears about facial recognition technology, and more specifically how law enforcement could use it to monitor protesters. Privacy advocates and progressive lawmakers have warned of a chilling effect on speech and expression should such surveillance go unchecked.
Lawmakers like Eshoo and Rush sent a spate of letters to law enforcement agencies asking how they monitored protests and signed onto new bills like the Facial Recognition and Biometric Technology Moratorium Act, which would suspend the use of such technologies by federal agencies or officials absent congressional approval.
In an interview with CNBC, Eshoo stressed that the moratorium is just that: not an outright ban, but an opportunity for Congress to impose stronger guardrails on the technology's use.
"The goal is that the technology is used responsibly," she said. "It can be a very useful and fair tool, but we don't have that now."
But, Eshoo said, things haven't been moving as quickly as she'd like them to.
"I'm not happy about where we are because I think the needle hasn't moved at all," she said.
There have been some changes at the state and local levels, where lawmakers in Somerville, Massachusetts, and San Francisco and Oakland, California, have banned the use of facial recognition technology by their city agencies. California now has a three-year moratorium on the use of facial recognition in police body cameras. Last year, Portland, Oregon, lawmakers passed one of the most sweeping bans on the technology, while Washington state lawmakers opted instead to require more guardrails and transparency around the government's use of it.
It could take more of those laws for Congress to finally act, much as the rise of state digital privacy laws has increased the urgency for a federal standard (though even in that case, lawmakers have yet to coalesce around a single bill).
Still, many continue to call for a permanent ban on law enforcement use of the tools and for federal regulation of the technology.
"While there are a lot of things going on at the state and local levels that are incredibly important, we need to get our federal government to actually get laws passed," said Arisha Hatch, campaign director at Color of Change.
Privacy advocates also remain wary of industry-backed legislation, as tech companies like Amazon and Microsoft have built strong lobbying presences in state capitals across the U.S. to help shape facial recognition bills.
Microsoft CEO Satya Nadella (L) and Amazon CEO Jeff Bezos attend a meeting of the American Technology Council in the State Dining Room of the White House on June 19, 2017 in Washington, DC.
Chip Somodevilla | Getty Images
There is concern that tech companies will push for state laws that effectively allow them to keep selling and profiting from facial recognition with few guardrails.
Advocates point to the recently passed facial recognition bill in Washington state, sponsored by a state senator who works for Microsoft, as a weak attempt to regulate the technology. Versions of the Washington law have since been introduced in several states, including California, Maryland, South Dakota and Idaho.
Groups like the American Civil Liberties Union argued the bill should have temporarily banned facial surveillance until the public could decide whether and how to use the technology. The ACLU also objected to the fact that the Washington law makes it legal for government agencies to use facial recognition to deny citizens access to essential services such as "housing, health care, food and water," so long as those decisions undergo loosely defined "meaningful human review," the group said.
At the federal level, tech giants such as Amazon, IBM, Microsoft and Google have voiced support for establishing rules around facial recognition. But privacy advocates fear the companies are pushing for weaker federal regulation that, if passed, could preempt stronger state laws.
Any federal law that provides less than a total ban on police use of facial recognition technology must not include a preemption clause, said Wessler of the ACLU, meaning the federal law would not override state laws that may be more restrictive of the technology.
Wessler added that any federal facial recognition law must give individuals the right to sue entities, such as law enforcement agencies, that violate it.
"Those are the two things that Amazon and Microsoft and the other companies want to avoid," said Wessler. "They want a weak law that basically gives them the excuse to say, 'We're a safe, regulated space now, so don't worry.'"
While federal laws reining in the technology may be a long way off, private-sector decisions to restrict the use of these products, even if imperfect, can help. Several privacy advocates who are critical of the technology and the companies that sell it agreed that severely limiting the tool's use makes a difference.
"While it's great that Amazon took a hiatus and every other company took a hiatus, people are still developing this, and they're still developing it," said Beryl Lipton, an investigative researcher at the Electronic Frontier Foundation.
There is little transparency about how police use facial recognition software developed by big tech companies. Amazon, for example, has not disclosed which law enforcement agencies use Rekognition or how many do. Additionally, when the company announced its year-long moratorium on selling facial recognition to police, it declined to say whether the ban would apply to federal law enforcement agencies such as Immigration and Customs Enforcement, which Amazon reportedly pitched the technology to in 2018.
Large consumer brands like Amazon aren't the only ones developing this technology or considering adding it to their products. Lesser-known companies like facial recognition startup Clearview AI have only recently come to public attention for their work with law enforcement. Rank One Computing, another company that supplies facial recognition technology to police, made headlines last year after its software mistakenly matched a Detroit man's driver's license photo to surveillance video of a shoplifting incident, resulting in what is believed to be the first known wrongful arrest in the U.S. based on the technology.
That makes it all the more meaningful when a company that deals directly with law enforcement, or relies heavily on the industry's business, restricts its own use of facial recognition. Police body camera maker Axon said in 2019 that it would hold off on using facial recognition technology after an independent ethics panel it consulted recommended avoiding the technology, largely on ethical grounds. Lipton said the move felt like "meaningful action."
WATCH: Concerns about police use of facial recognition are growing