West Midlands Police Chief Resigns After Microsoft Copilot AI Hallucination Led to Fan Ban


Chief Constable Craig Guildford admitted West Midlands Police used Microsoft Copilot to create a faulty intelligence report that banned Maccabi Tel Aviv fans from a match. The AI hallucinated a non-existent game between West Ham and Maccabi Tel Aviv. After weeks of denying that AI had been used, Guildford stepped down following the revelation that sensitive security decisions had relied on unverified generative AI output.

West Midlands Police Admits Microsoft Copilot Generated Fictional Information

West Midlands Police has become embroiled in a controversy that highlights the risks of deploying generative AI tools for sensitive security decisions without proper oversight. Chief Constable Craig Guildford admitted in a January 12 letter to the Home Affairs Committee that Microsoft Copilot was responsible for an AI hallucination that appeared in a faulty intelligence report used to ban Israeli football fans from attending a match [1]. The report, which assessed security risks for a Europa League match between Aston Villa and Maccabi Tel Aviv on November 6, 2025, included a reference to a match between West Ham and Maccabi Tel Aviv that never occurred [2]. This fictional information generated by the AI assistant became a key piece of evidence used by Birmingham's Safety Advisory Group when deciding to prevent Maccabi Tel Aviv fans from attending the game.

Source: The Verge

Chief Constable Craig Guildford Repeatedly Denied AI Use Before Admission

The situation deteriorated significantly due to Guildford's handling of questions about how the erroneous information entered the report. When he appeared before Parliament in December 2025 and again on January 6, 2026, Guildford explicitly denied that his force used AI tools, stating "We do not use AI" [1]. He attributed the error first to "social media scraping" and later to a flawed Google search conducted by officers [2]. This pattern of misleading MPs about AI's role continued until January 10, when Guildford discovered that Microsoft Copilot had in fact been used to generate the problematic match reference. In his apology letter to the Home Affairs Committee, he explained that both he and Assistant Chief Constable O'Hara "had, up until Friday afternoon, understood that the West Ham match had only been identified through the use of Google" [4]. The admission that AI had been used so irresponsibly sparked immediate calls for accountability.

Source: BBC

Leadership Failure and Political Fallout Over Banning Football Fans

Home Secretary Shabana Mahmood delivered a scathing assessment of the incident in the House of Commons, declaring that Guildford "no longer has my confidence" and describing the episode as a "failure of leadership" [1]. She criticized the police for relying on information that was "exaggerated or simply untrue" and highlighted the confirmation bias that led to banning football fans despite questionable evidence. The decision to ban Israeli football fans had become politically charged, with some viewing it as discriminatory given that Islamist terror attacks, including an October 2 attack on a Manchester synagogue, represented a more serious security threat [1]. Conservative MPs joined calls for Guildford's resignation, with MP Nick Timothy expressing particular concern about officers using "a new, unreliable technology for sensitive purposes" without proper AI policy guidelines [1].

Chief Constable Retires as Scrutiny Intensifies

Craig Guildford, 52, retired from his position as chief of England's third-largest police force on January 16, just days after admitting the Microsoft Copilot error [3]. He had been scheduled to meet with Simon Foster, the Police and Crime Commissioner for the West Midlands, on January 27, but his departure came before that meeting could take place. The incident adds to a growing list of cases in which AI hallucination has caused real-world harm. In October, consulting firm Deloitte refunded A$440,000 to the Australian government after delivering a report filled with fabricated references generated by AI [3]. Generative AI tools have also produced fictional legal cases that lawyers cited in both US and UK courts.

Source: ESPN

Broader Implications for AI Deployment in Critical Sectors

The West Midlands Police incident raises urgent questions about how organizations deploy AI tools without adequate safeguards. Microsoft markets Copilot as a productivity assistant across corporate and government sectors, and as of late last year, the U.S. House of Representatives had begun using the technology [5]. Microsoft itself warns at the bottom of its Copilot interface that "Copilot may make mistakes," yet the technology continues to spread rapidly [2]. Testing by The Verge found that Microsoft's AI assistant frequently "got things wrong" and "made stuff up" [2]. The West Midlands case demonstrates how misinformation generated by AI can influence high-stakes decisions when organizations lack proper verification protocols. The force reportedly had no AI policy in place, allowing officers to use the technology for intelligence gathering without training or oversight [1]. As tech companies push aggressive adoption of AI across all sectors, this incident serves as a stark warning about the need for accountability frameworks before deploying generative AI in contexts where errors can have serious consequences for public safety and civil liberties.
