A close-up of a police facial recognition camera in use at the Cardiff City Stadium in Cardiff, Wales on January 12, 2020. Police used the technology to identify people subject to football banning orders in an effort to prevent disorder. Critics argued that the use of such technology was invasive and discriminatory.
Privacy advocates in the UK are claiming victory after an appeals court ruled today that a police force's use of facial recognition technology was "fundamentally flawed" and violated multiple laws.
South Wales Police began trialing automated facial recognition technology in 2017, openly deploying a system called AFR Locate at several dozen major events, such as football matches. Police compared the scans against watchlists of known individuals to identify people who were wanted by police, had open warrants for their arrest, or were otherwise persons of interest.
In 2019, Cardiff resident Ed Bridges sued the police, claiming that the scanning of his face in 2017 and 2018 violated his legal rights. Bridges, backed by British civil rights group Liberty, lost that case in 2019, but the appeals court overturned the decision today, ruling that South Wales Police's facial recognition program was unlawful.
"Too much discretion is currently left to individual police officers," the court ruled. "It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed." The court added that police did not adequately investigate whether the software in use exhibited racial or gender bias.
Data released by South Wales Police in 2018 showed that about 2,300 of the nearly 2,500 matches the software generated at a 2017 event – roughly 92 percent – were false positives.
"I am delighted that the court has agreed that facial recognition clearly endangers our rights," said Bridges in a written statement following the ruling. "This technology is an intrusive and discriminatory mass surveillance tool … we should all be able to use our public space without being subjected to oppressive surveillance."
The ruling does not completely ban the use of facial recognition technology in the UK, but it does limit how it may be used and what law enforcement agencies must do to comply with human rights law.
"I am confident this is a judgment that we can work with," a South Wales Police spokesman said, confirming that the agency has no plans to appeal the ruling.
Other police forces in the UK that use facial recognition technology will now have to meet the standard set out in today's decision. That includes the Metropolitan Police in London, which deployed a similar system earlier this year.
Liberty hailed the win as "the world's first legal challenge" to police use of facial recognition technology, but it almost certainly won't be the last. Here in the US, police use of facial recognition has come under intensified scrutiny amid this year's nationwide civil rights protests in support of Black communities and against police brutality.
The ACLU filed a formal complaint – though not a lawsuit – against the Detroit police in June, after officers arrested the wrong man based on a false positive from a facial recognition system. That system, Detroit's police chief later admitted, misidentifies suspects a whopping 96 percent of the time.
US companies that make facial recognition systems have also tried to distance themselves from police use in recent months. IBM exited the business entirely in June, with CEO Arvind Krishna saying at the time, "Vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported." A few days later, Amazon followed suit with a one-year moratorium on police use of its Rekognition facial recognition platform.