By Kate Cox
August 11, 2020
Privacy advocates in the UK are claiming victory as an appeals court ruled today that police use of facial recognition technology in that country has “fundamental deficiencies” and violates several laws.
South Wales Police began using automated facial recognition technology on a trial basis in 2017, deploying a system called AFR Locate overtly at several dozen major events such as soccer matches. Police matched the scans against watchlists of known individuals to identify people who were wanted by the police, had open warrants against them, or were otherwise persons of interest.
In 2019, Cardiff resident Ed Bridges filed suit against the police, alleging that having his face scanned in 2017 and 2018 violated his legal rights. Backed by the UK civil rights organization Liberty, Bridges lost his suit in 2019, but the Court of Appeal today overturned that ruling, finding that the South Wales Police facial recognition program was unlawful.
“Too much discretion is currently left to individual police officers,” the court ruled. “It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed.” The police did not sufficiently investigate whether the software in use exhibited race or gender bias, the court added.
The South Wales Police in 2018 released data admitting that about 2,300 of the nearly 2,500 matches the software made at a 2017 event (roughly 92 percent) were false positives.
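For readers who want to check that headline figure, the rate follows directly from the reported counts. The sketch below uses the article's rounded numbers; the force's exact counts may differ slightly.

```python
# False-positive rate from the South Wales Police figures cited above.
# These are the article's rounded numbers, not the force's exact counts.
false_positives = 2_300  # matches later determined to be wrong
total_matches = 2_500    # all matches the system flagged at the event

rate = false_positives / total_matches
print(f"False-positive rate: {rate:.0%}")  # prints: False-positive rate: 92%
```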
“I’m delighted that the Court has agreed that facial recognition clearly threatens our rights,” Bridges said in a written statement after the ruling. “This technology is an intrusive and discriminatory mass surveillance tool… We should all be able to use our public spaces without being subjected to oppressive surveillance.”
The ruling did not completely ban the use of facial recognition tech inside the UK, but it narrows the scope of what is permissible and clarifies what law enforcement agencies must do to comply with human rights law.
“I am confident this is a judgment that we can work with,” a spokesperson for the South Wales Police said, confirming that the agency does not plan to challenge the ruling.
Widespread impact?
Other police forces in the UK that deploy facial recognition technology will have to meet the standard set by today’s ruling. That includes the Metropolitan Police in London, which deployed a similar system earlier this year.
Liberty hailed the case as “the world’s first legal challenge” to police use of facial recognition tech, but it’s almost certainly not going to be the last. Police use of facial recognition technology here in the United States has come under increased scrutiny against the backdrop of this year’s nationwide civil rights protest movement in support of Black communities and against police brutality.
The ACLU in June filed a formal complaint, though not a lawsuit, against police in Detroit after they arrested the wrong man based on a false-positive match from a facial ID system. That system, the Detroit police chief later admitted, misidentifies suspects a whopping 96 percent of the time.
The US companies that manufacture facial recognition systems have also tried to distance themselves from police in recent months. IBM left the business entirely in June; CEO Arvind Krishna said at the time, “vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.” A few days later, Amazon followed suit with a one-year moratorium on allowing police to use its facial recognition platform, Rekognition.