UK privacy activist to appeal after facial recognition case fails

Ed Bridges, whose face was scanned without consent while shopping and again at a protest, to challenge landmark ruling.

The facial recognition technology has sparked privacy fears around the world [Thomas Peter/Reuters]

British privacy activist Ed Bridges is set to appeal a landmark ruling that endorses the “sinister” use of facial recognition technology by the police to hunt for suspects.

In what is believed to be the world’s first case of its kind, Bridges told the High Court in Wales that the local police breached his rights by scanning his face without consent.

“This sinister technology undermines our privacy and I will continue to fight against its unlawful use to ensure our rights are protected and we are free from disproportionate government surveillance,” Bridges said in a statement.

But judges said the police’s use of facial recognition technology was lawful and legally justified.

Civil rights group Liberty, which represented 36-year-old Bridges, said it would appeal the “disappointing” decision, while police chiefs said they understood the fears of the public.

“I recognise that the use of artificial intelligence and face-matching technologies around the world is of great interest and at times, concern,” chief constable Matt Jukes said in a statement.


“With the benefit of this judgment, we will continue to explore how to ensure the ongoing fairness and transparency of our approach.”

Scanned without consent

Bridges was scanned by a vehicle-mounted Automatic Facial Recognition (AFR) camera while Christmas shopping in South Wales in 2017.

The following year, he was again scanned at a protest against the arms trade.

“On that occasion, the van was parked opposite the crowd,” Bridges told Al Jazeera in May, when the judicial review was being heard in Cardiff. “I felt it was there to sort of intimidate people and dissuade them from using their peaceful right to protest.”

His lawyer argued that recording people’s facial biometric data without consent – or grounds for criminal suspicion – violated rights to privacy, as well as equality and data protection laws.

South Wales Police was the first service in Britain to use the technology, deploying mobile cameras to check passersby against a database of offenders at dozens of locations, including football matches and rock concerts, according to the Thomson Reuters Foundation.

Police said an identified suspect can be stopped on the spot, while the data of people who do not match the database is discarded.

The technology has been banned in the US city of San Francisco. A trial operation of Automatic Facial Recognition in London has now ended, a Metropolitan Police spokesperson told Al Jazeera, but not before a man was fined for disorderly conduct after covering his face while passing a police van in the British capital.

Biometric harvesting

Facial recognition technology falls into two main streams: hardware and software. Cameras with built-in recognition hardware can identify or catalogue faces quickly but are expensive, while software solutions that scan live images fed from existing cameras are becoming more popular.
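
To illustrate the software stream described above, the sketch below shows how a generic open-source pipeline might compare faces in a live camera feed against a small watchlist and discard everything else, similar in spirit to the workflow police describe. It uses OpenCV and the open-source face_recognition library purely as an example; the watchlist filenames and the default matching tolerance are assumptions, and this is not the system deployed by South Wales Police.

```python
import cv2                    # reads frames from an existing camera feed
import face_recognition      # open-source face detection and encoding

# Hypothetical watchlist: example image filenames, assumed to contain one face each.
WATCHLIST_IMAGES = ["suspect_01.jpg", "suspect_02.jpg"]
watchlist_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in WATCHLIST_IMAGES
]

video = cv2.VideoCapture(0)   # 0 = default camera; could be any existing feed

while True:
    ok, frame_bgr = video.read()
    if not ok:
        break

    # face_recognition expects RGB images; OpenCV delivers BGR.
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)

    # Detect faces in the frame and compute a 128-dimensional encoding for each.
    locations = face_recognition.face_locations(frame_rgb)
    encodings = face_recognition.face_encodings(frame_rgb, locations)

    for encoding in encodings:
        matches = face_recognition.compare_faces(watchlist_encodings, encoding)
        if any(matches):
            print("Possible watchlist match - refer to a human operator")
        # Encodings that do not match are never stored, mirroring the
        # "data is discarded" policy the police describe above.

video.release()
```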


The UK's information commissioner is already studying the increasing use of the technology in London after it was revealed that a private property developer was using facial recognition to track tens of thousands of people in and around the central King's Cross area.

The non-consensual harvesting of biometric data is already outlawed in the EU under the General Data Protection Regulation (GDPR).

But now the EU wants new laws that “should set a world-standard for AI regulation”, according to a document seen by the Financial Times newspaper.

Most Americans trust police

Across the Atlantic, a majority of Americans trust police to use facial recognition technology responsibly, but fewer are comfortable about its deployment by the private sector, a poll showed on Thursday.

The Pew Research Center survey found that US adults had confidence in law enforcement using the artificial intelligence systems by a 56-39 percent margin, and a larger majority endorsed the use of the technology to assess security threats in public spaces.

But trust levels are markedly lower for private entities using facial recognition, with only one in three saying they trust technology firms. Just 18 percent said they believed advertisers would use facial recognition responsibly.


Some 36 percent said it would be acceptable for these tools to track who is entering or leaving apartment buildings; 30 percent approved of monitoring employee attendance at a place of business; and 15 percent endorsed the idea of seeing how people respond to public advertising displays in real time.

The results varied by age, political affiliation and racial or ethnic background.

Roughly 60 percent of white people, but only 43 percent of black people, said they trusted the police to use facial recognition responsibly.

Republicans were more trusting than Democrats and older adults more than their younger counterparts.

The poll came amid growing concerns, with some researchers warning of errors, notably in identifying minorities, and of the creation of large databases which could be breached or hacked.

The poll surveyed 4,272 US adults between June 3 and 17.

Source: Al Jazeera, News Agencies