A police notice alerting the public to an equipment trial in London in 2017. Photograph: Mark Kerrison/Alamy

Facial recognition tech is arsenic in the water of democracy, says Liberty


Human rights group calls on England and Wales to ban police use of AFR in public spaces

Automated facial recognition poses one of the greatest threats to individual freedom and should be banned from use in public spaces, according to the director of the campaign group Liberty.

Martha Spurrier, a human rights lawyer, said the technology had such fundamental problems that, despite police enthusiasm for the equipment, its use on the streets should not be permitted.

She said: “I don’t think it should ever be used. It is one of, if not the, greatest threats to individual freedom, partly because of the intimacy of the information it takes and hands to the state without your consent, and without even your knowledge, and partly because you don’t know what is done with that information.”

Police in England and Wales have used automated facial recognition (AFR) to scan crowds for suspected criminals in trials in city centres, at music festivals, sports events and elsewhere. The events, from a Remembrance Sunday commemoration at the Cenotaph to the Notting Hill carnival and the Six Nations rugby, drew combined crowds in the millions.

San Francisco recently became the first US city to ban police and other agencies from using automated facial recognition, following widespread condemnation of China’s use of the technology to impose control over millions of Uighur Muslims in the western region of Xinjiang.

When deployed in public spaces, automated facial recognition units use a camera to record faces in a crowd. The images are then processed to create a biometric map of each person’s face, based on measurements of the distance between their eyes, nose, mouth and jaw. Each map is then checked against a “watchlist” containing the facial maps of suspected criminals.
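In code terms, the matching step is essentially a nearest-neighbour search: each facial map is a numeric vector, and an alert fires whenever the vector from a passer-by's face falls within some distance of a vector on the watchlist. The Python sketch below is purely illustrative; the vector size, threshold and names are assumptions, not details of any deployed police system.

```python
import numpy as np

# Illustrative sketch only: the sizes, names and threshold below are
# assumptions, not details of any deployed police system.

EMBEDDING_SIZE = 128    # assumed length of a face "map" vector
MATCH_THRESHOLD = 0.6   # assumed distance cut-off; lower = stricter

def match_against_watchlist(probe, watchlist):
    """Return watchlist identities whose stored face map lies within
    MATCH_THRESHOLD (Euclidean distance) of the probe face map."""
    hits = []
    for identity, stored in watchlist.items():
        if np.linalg.norm(probe - stored) < MATCH_THRESHOLD:
            hits.append(identity)
    return hits

# Toy usage: random vectors stand in for real biometric embeddings.
rng = np.random.default_rng(0)
watchlist = {f"suspect_{i}": rng.normal(size=EMBEDDING_SIZE)
             for i in range(3)}
probe = watchlist["suspect_1"] + rng.normal(scale=0.01, size=EMBEDDING_SIZE)
print(match_against_watchlist(probe, watchlist))  # -> ['suspect_1']
```

The detail that matters for the civil liberties argument is in the loop: every face the camera captures is compared against every entry on the watchlist, which is why critics describe live AFR as mass surveillance by construction.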

Spurrier said: “I think it’s pretty salutary that the world capital of technology has just banned this technology. We should sit up and listen when San Francisco decides that they don’t want this on their streets.

“It goes far above and beyond what we already have, such as CCTV and stop-and-search. It takes us into uncharted invasive state surveillance territory where everyone is under surveillance. By its nature it is a mass surveillance tool.”

Quick Guide

How is facial recognition used around the world?


China has embraced automated facial recognition on a dramatic scale. It is the first government known to use the technology for racial profiling, with cameras screening hundreds of thousands of residents to identify and control Uighurs, a largely Muslim minority. In cities across the vast country, the technology checks the identities of students at school gates and universities, keeps citizens in line by naming and shaming jaywalkers, and records the faces of people who use too much loo roll in public toilets.

In Russia, 5,000 cameras in Moscow are fitted with facial recognition software that takes live footage and checks it against police and passport databases for wanted people. Moscow’s NTechLab makes face recognition software rated among the best in the world.

Despite San Francisco’s decision to ban live facial recognition, many US police forces are turning to the technology to help solve crimes. Rather than the mass surveillance that comes with live AFR, these forces take an image of a suspect and check it against faces on a police database. Outside law enforcement, Atlanta airport has installed the technology to allow passengers to pass through check-in and security and on to their plane without having to fish out their passport.

In December, it emerged that Taylor Swift had used facial recognition to screen fans for stalkers before a gig at the Rose Bowl. Fans were lured into a kiosk to watch rehearsal clips while a facial recognition camera checked their faces against a database of hundreds of known Swift stalkers.


She said a lack of strong governance and oversight could allow the police to roll out live facial recognition by stealth, without a meaningful debate on whether the public wanted it or not. The technology was developing so fast, she said, that government was failing to keep up.

“There is a real sense of technological determinism that is often pushed by the big corporations, but also by law enforcement and by government, that it’s inevitable we’ll have this, so we should stop talking about why we shouldn’t have it,” she said.

“What San Francisco shows us is that we can have the moral imagination to say, sure, we can do that, but we don’t want it. It’s so important not to assume that security outweighs liberty at every turn.”

Liberty brought a landmark legal case against South Wales police last month challenging the force’s use of the technology. It is supporting the Cardiff resident Ed Bridges, who says his privacy was invaded when an AFR unit captured and processed his facial features as he popped out for a sandwich in December 2017, and again at a peaceful protest against the arms trade. A judgment is expected in the coming weeks.

Three UK forces have used AFR in public spaces since 2014: the Metropolitan police, South Wales police and Leicestershire police. A Cardiff University review of the South Wales police trials, which were backed by £2m from the Home Office, found the system froze and crashed when faced with large crowds, and struggled with bad light and poor quality images. The force’s AFR units flagged up 2,900 possible suspects, but 2,755 were false positives. An upgrade of the software, provided by the Japanese company NEC, led to confirmed matches increasing from 3% to 26%.
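A back-of-the-envelope check puts those trial figures in perspective: only about one alert in 20 was right overall. The snippet below just redoes the arithmetic from the review’s numbers; the aggregate rate spans deployments before and after the software upgrade, which is why it sits between the 3% and 26% figures quoted.

```python
# Arithmetic on the South Wales trial figures cited above.
alerts = 2900            # possible suspects flagged by the AFR units
false_positives = 2755   # alerts that proved to be wrong

true_matches = alerts - false_positives         # 145
precision = true_matches / alerts               # share of alerts that were right
false_alert_rate = false_positives / alerts     # share of alerts that were wrong

print(f"true matches: {true_matches}")          # 145
print(f"precision: {precision:.1%}")            # 5.0%
print(f"false alerts: {false_alert_rate:.1%}")  # 95.0%
```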

The report described other technical problems with the system. Officers identified what they called “lambs”: people on the watchlist who were repeatedly matched to innocent members of the public. At Welsh rugby matches, for example, AFR flagged up one female suspect 10 times. It was wrong on every occasion.

There are also more insidious issues. The technology works better for white men than any other group, meaning women and black and minority ethnic people are more likely to be flagged up in error, and so stopped and asked to identify themselves. Watchlists are suspect too, and reflect the kinds of biases that lead to areas with large black populations being over-policed.

Spurrier said: “You can see a train of injustice where the existing, entrenched prejudice in our society is codified in technology and then played out in the real world in a way that further breaks down trust between the police and communities, and further alienates and isolates those communities.”

While technical flaws can potentially be fixed, Spurrier opposes live facial recognition on a fundamental level. Mass surveillance has a chilling effect that distorts public behaviour, she said, a concern also raised in a May report by the London policing ethics panel. It found that 38% of 16 to 24-year-olds would stay away from events using live facial recognition, with black and Asian people roughly twice as likely to do so as white people.

Spurrier said: “It doesn’t take a great deal of imagination to see how something like facial recognition eats into the fabric of society and distorts relationships that are really human and really essential to a thriving democracy.

“Little by little, across the country, in tiny but very significant ways, people will stop doing things. From a person saying I’m not going to go to that protest, I’m not going to pray at that mosque, or hang out with that person, or walk down that street.

“Once that is happening at scale, what you have is a mechanism of social control. When people lose faith that they can be in public space in that free way, you have put arsenic in the water of democracy and that’s not easy to come back from.”

The Met police said that, after the London policing ethics panel report, the force was awaiting a second independent evaluation of its trials. “Our trial has come to an end so there are no plans to carry out any further deployments at this stage,” a spokesperson said, adding that the Met would then consider if and how to use the technology in the future.

Deputy chief constable Richard Lewis of South Wales police said the force had been cognisant of privacy concerns throughout its trials and understood it must be accountable and subject to “the highest levels of scrutiny”.

“We have sought to be proportionate, transparent and lawful in our use of AFR during the trial period and have worked with many stakeholders to develop our approach and deployments,” he said. “During this period we have made a significant number of arrests and brought numerous criminals to justice.”
