Privacy campaigners are urging the UK government to introduce comprehensive legislation to regulate facial recognition technology, amid fears that a lack of oversight has turned Britain into a “wild west” for biometric surveillance.
A new report from the Ada Lovelace Institute, an independent authority on data ethics and artificial intelligence, has exposed major shortcomings in how biometric systems are governed.
The think tank is calling on Sir Keir Starmer’s administration to clarify the legal boundaries of facial recognition and establish an independent watchdog to enforce stricter rules.
The report, published on Thursday, highlights “significant gaps and fragmentation” in current biometrics regulation and warns of the growing risks posed by live facial recognition (LFR) – technology capable of matching faces against large databases in real time.
Millions of Faces Scanned, But Legal Framework in Disarray
According to data compiled by Liberty Investigates, nearly 5 million faces were scanned by UK police in 2023, resulting in over 600 arrests.
Facial recognition cameras are now appearing in high streets, stadiums and retail stores, raising alarm among privacy advocates.
However, the legality of these deployments remains questionable. A landmark 2020 Court of Appeal ruling found that South Wales Police’s use of facial recognition technology breached privacy and data protection laws.
“The fragmented approach to biometric governance makes it nearly impossible to assess whether police use of facial recognition is lawful,” the Ada Lovelace report said.
It also expressed concern over next-generation technologies that claim to detect emotions, intentions and even truthfulness—capabilities that outpace current regulatory safeguards.
Retailers Turning to Surveillance Amid Crime Spike
Retailers including Southern Co-op, Budgens, Sports Direct, and most recently Asda, have adopted facial recognition in response to a rise in shoplifting and assaults on staff.
While businesses insist the technology is only used to identify known offenders, concerns remain over its broad deployment in public-facing spaces.
Through Project Pegasus, police forces have access to private CCTV systems and can cross-reference footage with facial recognition databases—a collaboration critics say infringes on civil liberties and may lead to wrongful identification.
Pressure Mounts for Facial Recognition Regulation in the UK
Sarah Simms of Privacy International warned that the UK’s “legislative void” leaves the public vulnerable to abuse and surveillance.
“Live facial recognition is deeply invasive and needs targeted legal safeguards,” she said.
Liberty’s Charlie Whelton echoed the sentiment, stating: “The UK is lagging far behind Europe and the US, where stronger restrictions on facial recognition are already in place.” The EU AI Act and several US states have implemented bans or severe limitations on real-time facial recognition use in public.
“We’re trying to manage 21st-century technology with 20th-century laws,” Whelton said.
Government Response and Next Steps
Responding to concerns, Policing Minister Dame Diana Johnson acknowledged the need for clearer regulation and said the government would “outline plans in the coming months.”
While Johnson praised the technology as “transformational for policing,” she also admitted there are “very legitimate concerns” that must be addressed.
The Home Office defended facial recognition as a valuable crime-fighting tool, saying it helps “identify offenders more swiftly and accurately.”
