2 Sources
[1]
UK Gov Fails To Publish Use of AI in Public Sector - Why the Transparency Gap?
In three years, just nine entries have been made on the government's algorithmic tools register. When the U.K. government committed to publishing a register of AI tools used by public agencies in August, it was celebrated by transparency campaigners who were troubled by the steady creep of automated decision-making in the public sector. Yet, although that creep shows no signs of slowing, there hasn't been a single new entry on the algorithmic tools register since the government said disclosure would be mandatory.

The Algorithmic Tools Register

Initially launched in 2021 as a voluntary initiative to encourage transparency, the algorithmic tools register acts as a repository where public bodies deploying AI can record details of the tools they use. While some of the tools listed on the register are fairly run-of-the-mill AI applications, like a recommendation engine used to aid navigation of the government website, others are more embedded in civic decision-making. For example, the Food Standards Agency has reported that it uses an AI platform to predict which businesses might be at a higher risk of non-compliance with food hygiene regulations. This information is then used by authorities to help determine where to send inspectors.

No New Entries Despite Growing AI Adoption

Between 2021 and July 2024, nine public bodies registered AI tools on the government's transparency platform. However, since the government said it would make registration mandatory, there have been no new entries. Meanwhile, the Public Law Project (PLP) has identified 55 automated decision-making systems used by the Home Office, Department for Work and Pensions (DWP), Ministry of Justice and Ministry of Defence, as well as several local authorities and police constabularies. This figure has risen from 41 when the PLP register was launched in 2023.
Given that the information it contains was gathered by independent researchers using freedom of information requests and public records, it may also not reveal a full and up-to-date picture.

Government Transparency Failures

In comments reported by the Guardian, Peter Kyle, the secretary of state for science and technology, admitted the public sector "hasn't taken seriously enough the need to be transparent in the way that the government uses algorithms." "I accept that if the government is using algorithms on behalf of the public, the public has a right to know," he added. "The only way to do that is to be transparent about their use."

The most concerning aspect of the government's transparency shortcomings is the failure of the Home Office and DWP to report any of the dozens of AI tools they use that were identified by PLP. Names like the Home Office's "Asylum Initial Decision Model" suggest the department has incorporated algorithmic decision-making into the processing of asylum claims, yet claimants haven't been informed. Meanwhile, a suite of fraud risk tools used by DWP recalls a 2023 welfare scandal in the Netherlands. In that case, Amnesty International uncovered evidence that an AI tool used to process childcare benefit claims displayed unfair bias against racial minorities, who were disproportionately flagged as fraud risks.

Without more transparency, it will be difficult to hold public sector organizations accountable, eroding public trust and increasing the likelihood of illegal or unfair AI systems going unchecked.
[2]
UK government failing to list use of AI on mandatory register
Technology secretary admits Whitehall departments are not being transparent over the way they use AI and algorithms

Not a single Whitehall department has registered the use of artificial intelligence systems since the government said it would become mandatory, prompting warnings that the public sector is "flying blind" about the deployment of algorithmic technology affecting millions of lives.

AI is already being used by government to inform decisions on everything from benefit payments to immigration enforcement, and records show public bodies have awarded dozens of contracts for AI and algorithmic services. A contract for facial recognition software, worth up to £20m, was put up for grabs last week by a police procurement body set up by the Home Office, reigniting concerns about "mass biometric surveillance".

But details of only nine algorithmic systems have so far been submitted to a public register, with none of a growing number of AI programs used in the welfare system, by the Home Office or by the police among them. The dearth of information comes despite the government announcing in February this year that the use of the AI register would now be "a requirement for all government departments".

Experts have warned that if adopted uncritically, AI brings potential for harms, with recent prominent examples of IT systems not working as intended including the Post Office's Horizon software. AI in use within Whitehall ranges from Microsoft's Copilot system, which is being widely trialled, to automated fraud and error checks in the benefits system. One recent AI contract notice issued by the Department for Work and Pensions (DWP) described "a mushrooming of interest within DWP, which mirrors that of wider government and society".

Peter Kyle, the secretary of state for science and technology, has admitted the public sector "hasn't taken seriously enough the need to be transparent in the way that the government uses algorithms".
Asked about the lack of transparency, Kyle told the Guardian: "I accept that if the government is using algorithms on behalf of the public, the public have a right to know. The public needs to feel that algorithms are there to serve them and not the other way around. The only way to do that is to be transparent about their use."

Big Brother Watch, a privacy rights campaign group, said the emergence of the police facial recognition contract, despite MPs warning of a lack of legislation to regulate its use, was "yet another example of the lack of transparency from government over the use of AI tech". "The secretive use of AI and algorithms to impact people's lives puts everyone's data rights at risk. Government departments must be open and honest about how they use this tech," said Madeleine Stone, chief advocacy officer. The Home Office declined to comment.

The Ada Lovelace Institute recently warned that AI systems might appear to reduce administrative burdens, "but can severely damage public trust and reduce public benefit if the predictions or outcomes they produce are discriminatory, harmful or simply ineffective". Imogen Parker, an associate director at the data and AI research body, said: "Lack of transparency isn't just keeping the public in the dark, it also means the public sector is flying blind in its adoption of AI. Failing to publish algorithmic transparency records is limiting the public sector's ability to determine whether these tools work, learn from what doesn't, and monitor the different social impacts of these tools."

Only three algorithms have been recorded on the national register since the end of 2022. They are a system used by the Cabinet Office to identify digital records of long-term historical value, an AI-powered camera being used to analyse pedestrian crossings in Cambridge, and a system to analyse patient reviews of NHS services.
But since February there have been 164 contracts with public bodies that mention AI, according to Tussell, a firm that monitors public contracts. Tech companies including Microsoft and Meta are vigorously promoting their AI systems across government. Google Cloud funded a recent report that claimed greater deployment of generative AI could free up to £38bn across the public sector by 2030. Kyle called it "a powerful reminder of how generative AI can be revolutionary for government services".

Not all the latest public sector AI involves data about members of the public. One £7m contract with Derby city council is described as "Transforming the Council Using AI Technology" and a £4.5m contract with the Department for Education is to "improve the performance of AI for education".

A spokesperson for the Department for Science and Technology confirmed the transparency standard "is now mandatory for all departments" and said "a number of records [are] due to be published shortly".
The UK government is under fire for failing to update its mandatory AI register, raising concerns about transparency and accountability in the use of artificial intelligence in public services.
The UK government is facing criticism for its failure to maintain transparency in the use of artificial intelligence (AI) within the public sector. Despite a commitment made in August to publish a register of AI tools used by public agencies, there has been a notable lack of new entries since the initiative was declared mandatory [1][2].
Launched in 2021 as a voluntary initiative, the algorithmic tools register was designed to serve as a repository for public bodies to record details of their AI deployments. However, since its inception, only nine entries have been made, with no new additions following the government's mandate for disclosure [1].
The lack of updates to the register stands in stark contrast to the growing adoption of AI in the public sector: the Public Law Project has identified 55 automated decision-making systems in use across government departments, local authorities and police constabularies, up from 41 when its own register launched in 2023, and 164 public sector contracts mentioning AI have been recorded since February alone [1][2].
Several critical AI applications remain unreported on the register, raising concerns about accountability: none of the tools identified at the Home Office or the DWP, including the Home Office's "Asylum Initial Decision Model" and the DWP's suite of fraud risk tools, appear on it [1][2].
Peter Kyle, the Secretary of State for Science and Technology, has acknowledged the shortcomings:
"I accept that if the government is using algorithms on behalf of the public, the public has a right to know. The only way to do that is to be transparent about their use." 12
The lack of transparency has sparked warnings from experts and advocacy groups: Big Brother Watch said the secretive use of AI and algorithms puts people's data rights at risk, while the Ada Lovelace Institute cautioned that the public sector is "flying blind" in its adoption of AI [2].
As the UK government grapples with balancing AI adoption and transparency, the need for a comprehensive and up-to-date register becomes increasingly crucial. Without proper disclosure, it remains challenging to hold public sector organizations accountable, potentially eroding public trust and increasing the risk of unfair or illegal AI systems going unchecked [1][2].