Curated by THEOUTPOST
On Mon, 27 Jan, 8:01 AM UTC
2 Sources
[1]
AI prototypes for UK welfare system dropped as officials lament 'false starts'
Ministers have shut down or dropped at least half a dozen artificial intelligence prototypes intended for the welfare system, the Guardian has learned, in a sign of the headwinds facing Keir Starmer's effort to increase government efficiency.

Pilots of AI technology to enhance staff training, improve the service in jobcentres, speed up disability benefit payments and modernise communication systems are not being taken forward, freedom of information (FoI) requests reveal.

Officials have internally admitted that ensuring AI systems are "scalable, reliable [and] thoroughly tested" is a key challenge and say there have been many "frustrations and false starts".

Not all trials would be expected to make it into regular use, but two of those now scrapped had been highlighted by the Department for Work and Pensions (DWP) in its latest annual report as examples of how it had "successfully tested multiple generative AI proofs of concept". A-cubed was intended to help staff steer jobseekers into work. Aigent was supposed to accelerate personal independence payments relied on by millions of people with disabilities.

This month the prime minister declared "AI is the way ... to transform our public services" and wrote to all cabinet ministers "tasking them with driving AI adoption and growth ... and making that a top priority for their departments".

"Unsuccessful pilots and trials aren't necessarily a cause for concern, as they offer an opportunity to improve, but these failures raise important questions for the government's approach to AI in the public sector," said Imogen Parker, associate director at the Ada Lovelace Institute, an independent research body focused on data and AI. "Are the right lessons being learned and acted upon, and does the reality of AI match the rhetoric?"

No information about AI used by the DWP in the welfare system has yet been disclosed on the government algorithm transparency register, which has been a requirement across Whitehall for almost a year.

Officials say the time spent on the pilot software is not wasted, as the technology could later appear as part of a system that is rolled out, and thorough testing is essential prior to rollouts. But the move illustrates the complexities of Labour's hope to deploy AI to revolutionise public services and increase economic productivity.

This week Peter Kyle, the secretary of state for science, innovation and technology, announced a "blueprint for a modern digital government" and said his department "will put AI to work, speeding up our ability to deliver our Plan for Change, improve lives and drive growth".

Writing in December after a year of running i.AI, the Whitehall AI incubator, its director, Laura Gilbert, admitted "there have been abundant blockers, frustrations and false starts", but said "if something fails, we try, try again and find another route to impact". She said that of 57 ideas tested, 11 made it to rollout in various stages of testing and scaling. She added that it has been working with US AI firms including OpenAI, Anthropic, Google and Microsoft.

DWP officials told tech companies at a private meeting in August that making sure "products are scalable, reliable [and] thoroughly tested" are key challenges in moving AI systems from proofs of concept [POC] to full use, according to meeting notes released under FoI. The notes showed that "approximately 9 POCs have so far been completed" and "one POC has gone live, one is in the process of going live".

"It's encouraging that the public sector isn't taking a rigid or dogmatic approach to AI, particularly in welfare, where the risks of amplifying inequalities and causing real injustice are significant," said Parker. "Yet a lack of transparency remains a critical issue ... [It] should not depend on journalistic investigation - openness, evaluation, and learning must be central to the government's strategy."

The DWP declined to comment on the specific reasons AI pilots were dropped, but said considerations can include technological maturity, business readiness, business value, and scalability. It said it rigorously tests how much value the technology provides to officials and the public and its value for money.

A government spokesperson said: "Proof of concept projects are deliberately short, enabling new and innovative technologies to be explored and prototyped - not all projects are expected to become long-term, and the learning from them can be used in the future.

"This aligns with our 'scan, pilot, scale' approach set out in the AI opportunities action plan - because we recognise the tremendous potential of AI to transform our public services and save taxpayers billions."
[2]
'Serious concerns' about DWP's use of AI to read correspondence from benefit claimants
When your mailbag brims with 25,000 letters and emails every day, deciding which to answer first is daunting. When lurking within are pleas for help from some of the country's most vulnerable people, the stakes only get higher.

That is the challenge facing the Department for Work and Pensions (DWP) as correspondence floods in from benefit applicants and claimants - of which there are more than 20 million, including pensioners, in the UK.

The DWP thinks it may have found a solution in using artificial intelligence to read it all first - including handwritten missives. Human reading used to take weeks and could leave the most vulnerable people waiting too long for help. But "white mail", an AI, can do the same work in a day and supposedly prioritise the most vulnerable cases for officials to get to first.

By implication, it deprioritises other people, so its accuracy and how it reaches its judgments count, but both remain opaque. Despite a ministerial mandate, it is one of numerous public sector algorithms yet to be logged on the transparency register for central government AIs.

White mail has been piloted since at least 2023, when the then welfare secretary, Mel Stride, said it meant "those most in need can be more quickly directed to the relevant person who can help them". But documents released to the Guardian under the Freedom of Information Act show that benefit claimants are not told about its use. An internal data protection impact assessment said letter writers "do not need to know about their involvement in the initiative".

The assessment says correspondence can include national insurance numbers, dates of birth, addresses, telephone details, email addresses, details of benefit claims, health information, bank account details, racial and sexual characteristics, and details on children such as their dates of birth and any special needs.

People who work with benefit claimants are now voicing "serious concerns" about how the system handles sensitive personal data. Meagan Levin, the policy and public affairs manager at Turn2us, a charity which helps people facing financial insecurity, said the system "raises concerns, particularly around the lack of transparency and its handling of highly sensitive personal data, including medical records and financial details. Processing such information without claimants' knowledge and consent is deeply troubling."

According to the information so far released, the data is encrypted before the originals are deleted, and is held by the DWP and its cloud computing provider. The name of the provider is one of many pieces of information about the system that have been redacted.

The DWP's data protection impact assessment also says consulting individuals about this way of processing their data is "not necessary as ... these solutions will increase the efficiency of the processing". Officials say it is complementary to existing systems, and flags correspondence which is then reviewed by agents to determine whether a correspondent is in fact potentially vulnerable. The DWP said no decision was made by the AI and no data processed by it.

The vast trove of text is also used by the DWP "to determine insights" and create a "theme analysis", although little more about what form that takes and how these insights have been used has been released.

"Prioritising some cases inevitably deprioritises others, so it is vital to understand how these decisions are made and ensure they are fair," said Levin. "The DWP should publish data on the system's performance and introduce safeguards, including regular audits and accessible appeals processes, to protect vulnerable claimants.

"Transparency and accountability must be at the heart of any AI system to ensure it supports, rather than harms, those who rely on it."
The UK's Department for Work and Pensions (DWP) has dropped several AI prototypes for the welfare system, raising concerns about transparency and effectiveness in AI adoption for public services.
The UK's Department for Work and Pensions (DWP) has shut down or dropped at least six artificial intelligence prototypes intended for use in the welfare system, according to freedom of information requests reported by the Guardian [1]. The move highlights the challenges facing the government's effort to increase efficiency through AI in public services.
Several AI pilots have been scrapped, including projects intended to enhance staff training, improve the service in jobcentres, speed up disability benefit payments and modernise communication systems [1]. Among them were A-cubed, meant to help staff steer jobseekers into work, and Aigent, designed to accelerate personal independence payments.
These cancellations come despite some of the projects having been highlighted by the DWP in its latest annual report as successfully tested proofs of concept.
DWP officials have internally acknowledged the difficulty of ensuring AI systems are "scalable, reliable [and] thoroughly tested" [1]. In moving proofs of concept into full use, the department says key considerations include technological maturity, business readiness, business value and scalability.
One AI system that has been piloted since at least 2023 is "white mail", designed to read and prioritize correspondence from benefit applicants and claimants [2]. The system processes sensitive information, including national insurance numbers, dates of birth, addresses, telephone and email details, details of benefit claims, health information, bank account details, racial and sexual characteristics, and details on children.
Concerns have been raised about the lack of transparency and consent in processing this data, as benefit claimants are not informed that AI is used to handle their correspondence [2].
Despite these setbacks, the UK government remains committed to AI adoption in public services: the prime minister has told cabinet ministers to make driving AI adoption and growth a top priority, and the science secretary, Peter Kyle, has announced a "blueprint for a modern digital government" [1].
The government maintains that proof of concept projects are deliberately short-term, allowing new technologies to be explored and prototyped. It emphasizes that not all projects are expected to become long-term solutions, but that the learning from them can be applied in future developments [1].
As the UK government continues to pursue AI implementation in public services, balancing innovation with transparency, accountability, and data protection remains a critical challenge.