2 Sources
[1]
West Midlands Police earn red card over Copilot own goal
Parliament committee finds AI BS helped shape a real-world decision

UK Parliament has delivered the official postmortem on West Midlands Police's Copilot saga, and it reads like a case study in how not to mix generative AI with public order decision-making. MPs on the Home Affairs Committee have laid out their findings on how West Midlands Police handled the November Aston Villa fixture that saw Maccabi Tel Aviv supporters barred. The force's decision leaned in part on Copilot-generated claims about disorder at a supposed West Ham match, a fixture that existed only in the chatbot's imagination but still found its way into briefing materials.

The report lays out how that duff information managed to travel further up the chain than it ever should have. MPs say claims about the fictional West Ham game ended up shaping how risk was viewed, underlining that the real problem was not just the hallucination itself but how easily it was taken at face value.

The committee stops short of accusing former chief constable Craig Guildford of deliberately misleading Parliament, noting that he was not told before his evidence session on January 6 that AI had been used to generate the incorrect material. However, MPs say that by that point, the use of AI had already been disclosed internally, making it reasonable to expect that Guildford and assistant chief constable Matt O'Hara would have been properly briefed before appearing.

MPs say Guildford showed a remarkable lack of professional curiosity by failing to properly check the evidence before facing them, adding that getting the facts wrong twice points to wider due diligence failings rather than a one-off mistake. The report says it should not have taken two oral evidence sessions and a written correction to reach an accurate account, and warns that the episode raises serious questions about transparency and attention to detail within the force.
Guildford had told the committee that officers had not used AI to find the material, only to later correct the record in writing. Following criticism from Home Secretary Shabana Mahmood and others, Guildford retired at 52, and the acting chief constable moved to switch Copilot off across the organization while investigators worked out what had happened. Looking ahead, MPs say the force needs to rebuild transparency and be far more careful about what it treats as intelligence.

All of this lands at an awkward moment for policymakers. In a white paper published last month, the government set out plans to ramp up the use of AI across policing, including £115 million over the next three years for a new National Centre for AI in Policing known as Police.AI, initially focused on automating administrative work. ®
[2]
Police still using AI tool despite inaccurate evidence in Israeli football fan ban
At least 21 police forces across England are still using Copilot AI despite West Midlands Police (WMP) blocking Microsoft's tool after inaccurate evidence informed a decision to ban Israeli football fans, Sky News can reveal.

The Birmingham force turned off access to the software after admitting, following initial denials, that a Copilot "hallucination" was responsible for a match that never happened being included in an intelligence document justifying excluding Maccabi Tel Aviv fans from Aston Villa in November.

And, at the weekend, MPs on the Home Affairs Select Committee highlighted fresh concerns about Copilot after saying it produced inaccurate key claims about past disorder around a contentious Maccabi match in Amsterdam in 2024.

Microsoft told Sky News it "continuously evaluates" Copilot and urges companies to review how they are using it.

Only eight of the forces across the UK who responded to our questions on their AI policy told us Copilot could not be used in investigations - including police in Scotland and Northern Ireland.

Our discovery that so many forces still allow officers to use Copilot points to a disjointed approach across the country and a lack of coordinated policy. That is despite the Maccabi ban escalating into one of the biggest policing controversies of last year, eventually leading to WMP's chief constable, Craig Guildford, being forced out under government pressure.

The National Police Chiefs' Council told Sky News it "is confident that the potential benefits of using AI outweigh the risks posed, provided we remain committed and vigilant in using it correctly, responsibly and securely". Its AI experts advise forces to use Copilot "in the most appropriate way" - leaving decisions to be taken locally.
Greater Manchester Police, which is England's second-largest police force, defended the use of AI, telling Sky News: "We have a robust AI policy in place to help promote the use of such technology to speed up processes and ensure officers have more time to be on the streets rather than behind their desks."

West Yorkshire Police said staff are provided with "education and guidance on how to use it responsibly, which should avoid any issues".

But it has taken the Maccabi controversy to highlight concerns about how AI is being used and whether the technology has been tested robustly enough before being approved.

'Significant shortcomings'

West Midlands Police and Crime Commissioner Simon Foster told Sky News: "I am concerned about the way in which WMP was utilising AI, not only in connection with this particular policing operation.

"Because plainly there were some significant concerns, shortcomings, and failures around ensuring there was a proper regulatory management of the use of AI in connection to this particular police operation."

It emerged senior West Midlands officers were not clear about how the AI generated erroneous evidence - highlighting wider concerns about how the technology has been used as a time-saving tool despite the risks. The Home Affairs Select Committee found "proper due diligence was not applied".

Mr Foster said: "We need to make sure that it is lawful, it is reasonable, it is ethically used and there's a proper regulatory regime in place to ensure that it's not misused, and it doesn't throw up rogue results."

The police forces covering Northern Ireland and Scotland do not allow Copilot, while there are also blocks in place for the North Wales and Dyfed-Powys forces.
But Chris Todd, chair of the National Police Data and Analytics Board and Humberside Police chief constable, insisted AI is "providing benefits to our communities" by joining up data and reducing delays to stop criminals.

He said: "In Humberside Police we comply with the position of the National Police Chiefs' Council Artificial Intelligence Portfolio, which has outlined their confidence that the potential benefits of using AI outweigh the risks posed, provided we remain committed and vigilant in using it correctly, responsibly and securely.

"We echo that it should be used to support human decisions, not make them for us."

Among those with a more cautious approach is the Cleveland force, which doesn't block Copilot but insists "the force doesn't use AI to form intelligence or to assist with investigations".

Police Scotland has been running a trial with Copilot since October involving a "limited number of police officers and staff" as it balances "ethical and human rights considerations" with duties to keep people safe. The force said: "The trial does not involve any operational policing processes, and instead focuses on efficiencies in corporate processes, such as improving the retrieval of information across existing HR policies."

Microsoft defended its software and pointed to differences between the 365 Copilot service for workplaces and the free Copilot consumer chat service for general use online. Some police forces using Copilot admitted they use the chat product.

A Microsoft spokesperson said in a statement: "Microsoft 365 Copilot is grounded in an organisation's own data, security, and access controls, works only with information a user already has permission to access, and provides citations, so sources can be reviewed and verified.

"We continuously evaluate and improve our services and encourage organisations to use Copilot within their own governance and review practices."
A UK Parliament committee found West Midlands Police relied on an AI-generated hallucination about a fictional football match to ban Israeli fans, exposing critical gaps in AI governance. Despite this, at least 21 police forces still use Copilot with no coordinated oversight, raising urgent questions about responsible implementation of AI in public order decisions.
The UK Parliament's Home Affairs Committee has released a damning assessment of how West Midlands Police integrated AI into policing operations, specifically regarding the controversial November ban on Maccabi Tel Aviv supporters attending an Aston Villa fixture. The committee's findings reveal that Copilot, Microsoft's AI tool, generated inaccurate evidence about disorder at a fictional West Ham match that never occurred, yet this AI-generated hallucination made its way into an intelligence document that shaped the public order decision to bar fans [1].
The Home Affairs Committee findings show the problem extended beyond the chatbot's fabrication itself. MPs emphasized that the real failure lay in how easily the false information was accepted without verification, pointing to significant due diligence failings within the force. The report states it should not have required two oral evidence sessions and a written correction to establish an accurate account, signaling broader issues with transparency and attention to detail [1].

Former chief constable Craig Guildford initially told the committee that officers had not used AI to find the material, only to later correct the record in writing. While MPs stopped short of accusing Guildford of deliberately misleading Parliament, they noted he was not briefed before his January 6 evidence session that AI had generated the incorrect material, despite the AI use being disclosed internally by that point. The committee criticized Guildford for showing a "remarkable lack of professional curiosity" by failing to properly verify evidence before appearing, with MPs stating that getting facts wrong twice indicated systemic problems rather than isolated errors [1].

Following criticism from Home Secretary Shabana Mahmood and mounting pressure, Guildford retired at 52. The acting chief constable subsequently switched Copilot off across West Midlands Police while investigators examined what had transpired [1].
Despite the wrongful ban of football fans and the subsequent scandal, Sky News revealed that at least 21 police forces across England continue using Copilot, exposing troubling inconsistencies in AI policy and a lack of coordinated oversight [2]. Only eight forces responding to inquiries stated Copilot could not be used in investigations, including police in Scotland and Northern Ireland [2].

The National Police Chiefs' Council told Sky News it "is confident that the potential benefits of using AI outweigh the risks posed, provided we remain committed and vigilant in using it correctly, responsibly and securely." However, its guidance to use Copilot "in the most appropriate way" leaves decisions to local forces, creating a fragmented approach to regulatory oversight [2].

Greater Manchester Police, England's second-largest force, defended its use, stating it has "a robust AI policy in place to help promote the use of such technology to speed up processes and ensure officers have more time to be on the streets." West Yorkshire Police said staff receive "education and guidance on how to use it responsibly, which should avoid any issues" [2].
West Midlands Police and Crime Commissioner Simon Foster expressed concern about how the force was utilizing AI beyond this particular operation, noting "significant concerns, shortcomings, and failures around ensuring there was a proper regulatory management." Foster emphasized the need to ensure AI use is lawful, reasonable, and ethically deployed, with proper governance to prevent misuse and erroneous results [2].

The controversy has exposed how senior officers were unclear about how the AI tool generated inaccurate evidence, highlighting broader concerns about technology deployed as a time-saving measure despite the inherent risks. Chris Todd, chair of the National Police Data and Analytics Board, insisted AI should be used "to support human decisions, not make them for us," while acknowledging efficiency benefits [2].

Microsoft stated it "continuously evaluates" Copilot and urges companies to review their usage [2].

The scandal arrives at a critical juncture for policymakers. Last month, the government published a white paper outlining plans to expand AI in policing, including £115 million over three years for a new National Centre for AI in Policing called Police.AI, initially focused on automating administrative work [1]. The timing raises questions about whether adequate safeguards exist before scaling AI deployment across law enforcement.

The incident serves as a stark reminder that while AI offers potential to reduce administrative burden and improve data analysis, its integration requires rigorous accountability mechanisms, clear governance frameworks, and human rights considerations. As forces balance operational efficiency with public trust, the West Midlands case underscores that transparency and verification protocols cannot be optional extras but must form the foundation of any AI deployment in policing operations.
Summarized by Navi