Automated facial recognition technology wrong in 98% of cases, says new report


A new report on the automated facial recognition technology used by police to identify criminals shows that the wrong people were identified in 98% of UK cases.

The report, by privacy organisation Big Brother Watch, finds that the technology correctly identified people in only 2% of cases; in the remaining 98%, it wrongly flagged innocent members of the public.

While the technology is increasingly used by police forces across the UK, the Metropolitan Police has made no arrests using facial recognition. Big Brother Watch has expressed concern that police in Wales have staged interventions with 31 innocent members of the public who were incorrectly identified.

What’s more, the UK’s police forces store the images of all of the people incorrectly matched by automated facial recognition technology, which has also raised concerns about the storage of biometric images of thousands of innocent members of the public.

How is the technology supposed to work?

Automated facial recognition technology identifies unique facial characteristics, which can then be measured and matched against biometric data. The technology is often linked to surveillance and CCTV cameras, which are able to scan public spaces and crowds, comparing faces to a database in order to identify people in real time.
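The matching step described above can be sketched in code. In this hypothetical illustration, each face is reduced to a fixed-length vector of measurements (a biometric template), and a probe face is compared against a watchlist database using cosine similarity; the function names, templates, and threshold are all invented for the example, not taken from any real police system.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face templates: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, database, threshold=0.95):
    """Return (name, score) of the best match above the threshold, else None."""
    name, template = max(database.items(),
                         key=lambda kv: cosine_similarity(probe, kv[1]))
    score = cosine_similarity(probe, template)
    return (name, score) if score >= threshold else None

# Toy 4-dimensional templates; real systems use vectors with hundreds of
# dimensions produced by a trained neural network.
watchlist = {
    "suspect_a": [0.9, 0.1, 0.3, 0.7],
    "suspect_b": [0.2, 0.8, 0.6, 0.1],
}

probe = [0.88, 0.12, 0.31, 0.69]  # a camera frame resembling suspect_a
print(match_face(probe, watchlist))
```

The choice of threshold is exactly where false positives come from: set it too low and a system scanning thousands of faces in a crowd will routinely flag innocent people as matches, which is the failure mode the report's 98% figure describes.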

Big Brother Watch warns that the technology has no legal basis: no legislation or regulation governs the use of facial recognition. Because of this, the organisation has called on UK police forces to immediately cease using it.

What has the UK government said?

The UK government’s Surveillance Camera Commissioner, Tony Porter, responded to the report by emphasising the potential of automated facial recognition technology to protect the general public from changing threats. While the technology may be imperfect now, it is continually being developed as it is more broadly implemented.

Porter said: “It is inescapable that automated facial recognition capabilities can be an aid to public safety particularly from terrorist threats in crowded or highly populated places.” He added that, in his view, the public is not concerned about the intrusiveness of the technology because it is in widespread use outside of law enforcement contexts.

However, he called on the government to establish a “clear framework of legitimacy and transparency which guides the state, holds it to account and delivers confidence and security amongst the public.”
