Essex Police suspend live facial recognition cameras after study reveals racial bias concerns


Essex Police has temporarily halted its live facial recognition (LFR) technology after a Cambridge University study found it was statistically more likely to identify Black people than other ethnic groups. The force has worked with its algorithm software provider to update the system and now plans to resume deployments, even as the UK government pushes to expand LFR use from 10 to 50 vans nationwide.

Essex Police Halt Live Facial Recognition Amid Bias Findings

Essex Police has suspended deployment of live facial recognition cameras after a Cambridge University study revealed the technology exhibited racial bias in its identification patterns [1]. The research found the facial recognition system was statistically significantly more likely to correctly identify Black participants compared to other ethnic groups, raising questions about fairness in AI technologies in policing [2]. The force, which began using live facial recognition technology in summer 2024, commissioned two independent studies as part of its Public Sector Equality Duty obligations [1].

Source: BBC


Cambridge University Study Reveals Identification Rate Disparities

The Cambridge University study used 188 volunteers acting as members of the public in a controlled field experiment run alongside an actual police deployment [1]. Researchers found that at current operational settings, the facial recognition software correctly identified approximately half of the people on the watchlist who passed the cameras [2]. While incorrect identifications were extremely rare, the study observed concerning patterns. Of six false-positive identifications, four involved Black individuals, despite Black subjects constituting only 536 out of 2,251 observations, or 23.8 per cent of the sample [1]. The system also proved more likely to correctly identify men than women, highlighting multiple dimensions of potential bias in the technology [3].
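The disproportion in those false-positive figures can be verified with simple arithmetic. The sketch below uses only the numbers quoted above (536 of 2,251 observations; four of six false positives) and assumes nothing beyond them:

```python
# Arithmetic check of the sample figures reported by the Cambridge study.
black_observations = 536     # Black subjects observed
total_observations = 2251    # all observations in the field experiment

false_positives_black = 4    # false positives involving Black individuals
false_positives_total = 6    # all false positives recorded

# Share of Black subjects in the overall sample (~23.8%, as reported).
sample_share = black_observations / total_observations

# Share of false positives involving Black individuals (4 of 6, ~66.7%).
false_positive_share = false_positives_black / false_positives_total

print(f"Sample share:         {sample_share:.1%}")
print(f"False-positive share: {false_positive_share:.1%}")
```

The gap between the two shares is what the study flagged: Black subjects made up under a quarter of the sample but two-thirds of the false positives, though with only six false positives in total the absolute numbers are very small.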

Algorithm Update and Software Provider Collaboration

Following the findings, Essex Police worked directly with its algorithm software provider to review the results and implement updates to address the identified bias [1]. The force noted that while one study indicated potential bias in the positive identification rate, a second study analyzing more than 40 deployments between August 2024 and February 2025 suggested no statistically relevant bias [2]. During this period, the system scanned approximately 1.3 million faces in public spaces, leading to 123 police interventions and 48 arrests, roughly one arrest for every 27,000 faces scanned [2]. There was one confirmed mistaken intervention. After revising policies and procedures, Essex Police expressed confidence in resuming deployments of live facial recognition as part of operations to trace and arrest wanted criminals, while committing to continued monitoring to ensure no bias against any section of the community [1].
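The deployment statistics above lend themselves to the same kind of back-of-the-envelope check; the figures below come directly from the reported numbers (1.3 million faces scanned, 123 interventions, 48 arrests):

```python
# Back-of-the-envelope check of the reported deployment statistics.
faces_scanned = 1_300_000   # faces scanned across 40+ deployments
interventions = 123         # police interventions triggered
arrests = 48                # arrests made

faces_per_arrest = faces_scanned / arrests          # ~27,000, as reported
arrests_per_intervention = arrests / interventions  # share of interventions ending in arrest

print(f"Faces scanned per arrest: {faces_per_arrest:,.0f}")
print(f"Arrests per intervention: {arrests_per_intervention:.0%}")
print(f"Arrest rate per scan:     {arrests / faces_scanned:.6f}")
```

Worth noting: fewer than half the interventions ended in arrest, and one of the 123 was the confirmed mistaken intervention mentioned above.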

Government Plans to Expand LFR Despite Privacy Concerns

The suspension comes as the British government pushes to dramatically expand the use of live facial recognition across England and Wales. Earlier this year, the Home Office announced plans to increase LFR-equipped vans from 10 to 50, with deployments planned for town centers and high-crime hotspots [1]. The government committed more than £26 million to a national facial recognition system and £11.6 million specifically for LFR capabilities [1]. Home Secretary Shabana Mahmood defended the expansion, noting that the Metropolitan Police had made 1,700 arrests using the technology [2]. By the end of last year, 13 forces were using live facial recognition technology. The Home Office maintains that images are immediately and automatically deleted if they don't match the watchlist, and that all deployments are targeted, intelligence-led, time-bound, and geographically limited.

Source: Sky News


Civil Liberties Groups Challenge Biased AI Surveillance

Campaign group Big Brother Watch strongly criticized the technology, calling it "authoritarian, inaccurate and ineffective in equal measure" [2]. Jake Hurfurt from the organization warned that LFR as a tool of general mass surveillance has no place in a democracy like Britain, emphasizing that if police use it, the public should be able to expect that it does not racially discriminate [2]. The group argued that AI surveillance that is experimental, untested, inaccurate, or potentially biased has no place on British streets [2]. The Information Commissioner's Office has also weighed in, stating that all forces should conduct routine bias testing to check for discriminatory outcomes arising from technology design, training data, or watchlist composition. Without such testing, the ICO warned of a real risk of unfairness, emphasizing the importance of proportionality, transparency, and oversight in deciding when to deploy the technology. Researchers noted that different systems and conditions could produce varying results, calling for more testing to build a fuller understanding of the technology's performance, accuracy, and ethics.
