Curated by THEOUTPOST
On Fri, 14 Feb, 4:04 PM UTC
4 Sources
[1]
UK AI Safety Institute rebranded AI Security Institute
Plus: Keep calm and plug Anthropic's Claude into public services

Comment The UK government on Friday said its AI Safety Institute will henceforth be known as the AI Security Institute, a rebranding that attests to a change in regulatory ambition: from ensuring AI models are made with wholesome content to primarily punishing AI-abetted crime.

"This new name will reflect its focus on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks, and enable crimes such as fraud and child sexual abuse," the government said in a statement announcing the retitled public body.

AI safety - "research, strategies, and policies aimed at ensuring these systems are reliable, aligned with human values, and not causing serious harm," as defined by The Brookings Institution - has seen better days. Between Meta's dissolution of its Responsible AI Team in late 2023, the refusal of Apple and Meta to sign the EU's AI Pact last year, the Trump administration ripping up Biden-era AI safety rules, and concern about AI competition from China, there appears to be less appetite for preventive regulation - like what the US Food and Drug Administration tries to do with the food supply - and more interest in proscriptive regulation: enjoy your biased, racist AI, but don't use it to commit acts of terror or sex crimes.

"[The AI Security Institute] will not focus on bias or freedom of speech, but on advancing our understanding of the most serious risks posed by the technology to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops," the UK government said, championing unfettered discourse in a way not evident in its reported stance on encryption.

Put more bluntly, the UK is determined not to regulate the country out of the economic benefits of AI investment and the associated labor consequences - AI jobs and AI job replacement.

Peter Kyle, Secretary of State for Science, Innovation, and Technology, said as much in a statement: "The changes I'm announcing today represent the logical next step in how we approach responsible AI development - helping us to unleash AI and grow the economy as part of our Plan for Change." That plan being the Labour government's blueprint of priorities.

A key partner in that plan now is Anthropic, which has distinguished itself from rival OpenAI by staking out the moral high ground among commercial AI firms. Built by ex-OpenAI staff and others, it identifies itself as "a safety-first company," though whether that matters much anymore remains to be seen.

Anthropic and the UK's Department for Science, Innovation and Technology (DSIT) have signed a Memorandum of Understanding to make AI tools that can be integrated into UK government services for citizens.

"AI has the potential to transform how governments serve their citizens," said Dario Amodei, CEO and co-founder of Anthropic, in a statement. "We look forward to exploring how Anthropic's AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents."

Allowing AI to deliver government services has gone swimmingly in New York City, where the MyCity Chatbot, which relies on Microsoft's Azure AI, last year gave business owners advice that violated the law.
The Big Apple addressed this not by demanding an AI model that gets things right, but by adding a disclaimer in a popup window. The disclaimer dialogue also comes with a you're-to-blame-if-you-use-this checkbox, "I agree to the MyCity Chatbot's beta limitations." Problem solved.

Anthropic appears to be more optimistic about its technology and cites several government agencies that have already befriended its Claude family of LLMs.

The San Francisco upstart notes that the Washington, DC Department of Health has partnered with Accenture to build a Claude-based bilingual chatbot to make its services more accessible and to provide health information on demand. Then there's the European Parliament, which uses Claude for document search and analysis - so far without the pangs of regret evident among those using AI for legal support.

In England, Swindon Borough Council offers a Claude-based tool called "Simply Readable," hosted on Amazon Bedrock, that makes documents more accessible for people with disabilities by reformatting them with larger font, increased spacing, and additional images. (A rough sketch of that general pattern follows at the end of this article.)

The result has been significant financial savings, it's claimed. Where previously documents of 5-10 pages cost around £600 to convert, Simply Readable does the job for just 7-10 pence, freeing funds for other social services. According to the UK's Local Government Association (LGA), the tool has delivered a 749,900 percent return on investment - the figure that results from comparing a per-document cost of about 8 pence against the £600 baseline.

"This staggering figure underscores the transformative potential of 'Simply Readable' and AI-powered solutions in promoting social inclusion while achieving significant cost savings and improved operational efficiency," the LGA said earlier this month.

No details are offered on whether this AI saving entailed a cost in jobs or expenditures in the form of Jobseeker's Allowance. But Anthropic in time may have some idea about that: the UK government deal involves using the AI firm's recently announced Economic Index, which uses anonymized Claude conversations to estimate AI's impact on labor markets. ®
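The article doesn't describe how Simply Readable is wired up, but the general pattern it implies - sending a document to a Claude model hosted on Amazon Bedrock and asking for an easy-read rewrite - is straightforward. Here is a minimal sketch in Python using boto3's Converse API, assuming AWS credentials are already configured; the model ID, region, prompt, and function name are illustrative assumptions, not details from Swindon's tool:

```python
import boto3

# Bedrock runtime client; eu-west-2 (London) is an assumption, not a confirmed detail.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")

def simplify_document(text: str) -> str:
    """Ask a Claude model on Bedrock to rewrite a document in easy-read form (hypothetical prompt)."""
    response = bedrock.converse(
        # Example Claude model ID on Bedrock; the model Swindon actually uses is not public.
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{
            "role": "user",
            "content": [{
                "text": (
                    "Rewrite the following council document in an easy-read format: "
                    "short sentences, plain words, one idea per line.\n\n" + text
                )
            }],
        }],
        inferenceConfig={"maxTokens": 2048, "temperature": 0.2},
    )
    # The Converse API returns the assistant's reply as a list of content blocks.
    return response["output"]["message"]["content"][0]["text"]
```

At Bedrock's published per-token rates for the smaller Claude models, a 5-10 page document works out to pennies at most, which is at least consistent in magnitude with the 7-10p conversion cost the LGA cites.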
[2]
UK drops 'safety' from its AI body, now called AI Security Institute, inks MOU with Anthropic | TechCrunch
The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it's pivoting an institution that it founded a little over a year ago for a very different purpose.

Today the Department for Science, Innovation and Technology announced that it would be renaming the AI Safety Institute to the "AI Security Institute." With that, it will shift from primarily exploring areas like existential risk and bias in large language models to a focus on cybersecurity, specifically "strengthening protections against the risks AI poses to national security and crime."

Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced, but the MOU indicates the two will "explore" using Anthropic's AI assistant Claude in public services; Anthropic will also aim to contribute to work in scientific research and economic modelling, and at the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks.

"AI has the potential to transform how governments serve their citizens," Anthropic co-founder and CEO Dario Amodei said in a statement. "We look forward to exploring how Anthropic's AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents."

Anthropic is the only company being announced today -- coinciding with a week of AI activities in Munich and Paris -- but it's not the only one working with the government. A series of new tools unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the Secretary of State for Technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.)

The government's switch-up of the AI Safety Institute -- launched just over a year ago with a lot of fanfare -- to AI Security shouldn't come as too much of a surprise. When the newly installed Labour government announced its AI-heavy Plan for Change in January, it was notable that the words "safety," "harm," "existential," and "threat" did not appear at all in the document. That was not an oversight.

The government's plan is to kickstart investment in a more modernized economy, using technology, and specifically AI, to do that. It wants to work more closely with Big Tech, and it also wants to build its own homegrown big techs. The main messages it's been promoting have been development, AI, and more development. Civil servants will have their own AI assistant called "Humphrey," and they're being encouraged to share data and use AI in other areas to speed up how they work. Consumers will be getting digital wallets for their government documents, and chatbots.

So have AI safety issues been resolved? Not exactly, but the message seems to be that they can't be considered at the expense of progress. The government claimed that despite the name change, the song will remain the same.

"The changes I'm announcing today represent the logical next step in how we approach responsible AI development - helping us to unleash AI and grow the economy as part of our Plan for Change," Kyle said in a statement. "The work of the AI Security Institute won't change, but this renewed focus will ensure our citizens - and those of our allies - are protected from those who would look to use AI against our institutions, democratic values, and way of life."
"The Institute's focus from the start has been on security and we've built a team of scientists focused on evaluating serious risks to the public," added Ian Hogarth, who remains the chair of the institute. "Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks." Further afield, priorities definitely appear to have changed around the importance of "AI Safety". The biggest risk the AI Safety Institute in the U.S. is contemplating right now, is that it's going to be dismantled. U.S. Vice President J.D. Vance telegraphed as much just earlier this week during his speech in Paris.
[3]
UK renames AI Security Institute, drops "safety" in pivot to cybersecurity
The new Labour government has gone all in on AI since taking power in 2024, supported along the way by the UK's AI Safety Institute (AISI) - but not for much longer. The institution will remain, but the government has announced that it will now be renamed the UK AI Security Institute - signalling a definite shift towards cybersecurity.

But what's the difference? Security is focused on defending against things like cyberattacks, and on mitigating risks such as the technology being leveraged against national security or enabling crimes such as fraud or the development of chemical weapons. Safety has a slightly wider scope: it also aims to mitigate the risks of cyberattacks, but goes further to protect against misinformation spread by chatbots and to assess the societal impacts of AI models, rather than just the immediate cybersecurity threats.

The UK's widely publicised Plan for Change, released in January 2025, leant heavily on AI, introducing 'Growth Zones', handing public data over to train models, and aiming to see AI 'mainlined into the veins' of public services - but not once did the document mention the words 'harm', 'safety', or 'threat', TechCrunch noted.

Despite these omissions, the work of the AI Security Institute is still the same, says Secretary of State for Science, Innovation, and Technology Peter Kyle: "The work of the AI Security Institute won't change, but this renewed focus will ensure our citizens - and those of our allies - are protected from those who would look to use AI against our institutions, democratic values, and way of life."

As part of the new plan, the government has agreed a new partnership with AI firm Anthropic, working to 'realise the technology's opportunities, with a continued focus on the responsible development and deployment of AI systems.' This will include 'insights' on how AI can 'transform public services and improve the lives of citizens', as well as drive scientific development.

This is part of the UK's ambition to attract tech investments from around the world - trying to foster an environment perfect for AI innovation, seemingly free of safety regulations.
[4]
UK's AI Safety Institute renamed to reflect new focus
The organisation, now called the AI Security Institute, is a key player in the UK government's strategy for change and development.

Speaking at the Munich Security Conference, secretary of state for science, innovation and technology Peter Kyle announced that, amid a refocused agenda, the UK's AI Safety Institute has been renamed the AI Security Institute. The new moniker is in line with the UK watchdog's plans to seriously tackle AI-associated risks and their potential security implications.

To achieve its goals, the Institute has partnered with governmental departments such as the Defence Science and Technology Laboratory, the Ministry of Defence's science and technology organisation, which will be used to assess the risks posed by AI.

The Institute will also launch a new criminal misuse team, which will work jointly with the Home Office, conducting research on crime and security threats to UK citizens. With a focus on AI-related threats to national security, it will cover a broad range of issues, for example fraud, cyberattacks and how the technology can be used to develop chemical and biological weaponry. Another major problem the Institute intends to tackle is the issue of predators using AI tools to generate child sexual abuse material; the new team will explore methods to prevent people from utilising the technology in this way.

The Institute will move away from issues of bias and freedom of speech and instead prioritise establishing a scientific basis of evidence, enabling policymakers to set regulations as AI continues to develop and advance. This will involve working alongside the Laboratory for AI Security Research and the national security community, building on the expertise of the National Cyber Security Centre.

In a statement, Kyle said: "The changes I'm announcing today represent the logical next step in how we approach responsible AI development, helping us to unleash AI and grow the economy as part of our Plan for Change.

"The work of the AI Security Institute won't change, but this renewed focus will ensure our citizens and those of our allies are protected from those who would look to use AI against our institutions, democratic values and way of life.

"The main job of any government is ensuring its citizens are safe and protected, and I'm confident the expertise our Institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us."

A new agreement has also been struck between the UK's new Sovereign AI unit and AI company Anthropic, which will see both sides collaborating on technology opportunities, as well as on the responsible development and deployment of AI systems. Areas of focus will include how AI can transform public services and improve the lives of citizens.
The UK government has renamed its AI Safety Institute to AI Security Institute, signaling a shift in focus from AI safety to cybersecurity and economic growth. This move is accompanied by a new partnership with AI company Anthropic to explore AI applications in public services.
The UK government has announced a significant shift in its approach to artificial intelligence (AI) regulation and development. The AI Safety Institute, established just over a year ago, has been renamed the AI Security Institute, reflecting a change in focus from broad AI safety concerns to more specific security implications [1][2].
Peter Kyle, Secretary of State for Science, Innovation, and Technology, stated that this change represents "the logical next step in how we approach responsible AI development - helping us to unleash AI and grow the economy as part of our Plan for Change" [1]. The institute will now prioritize addressing serious AI risks with security implications, such as:

- the use of AI to develop chemical and biological weapons
- AI-enabled cyber-attacks
- crimes such as fraud and child sexual abuse
This pivot aligns with the UK government's broader strategy to boost its economy and industry through AI adoption and innovation [2].
Alongside the rebranding, the UK government announced a new partnership with AI company Anthropic. The Memorandum of Understanding (MOU) between Anthropic and the Department for Science, Innovation and Technology outlines plans to:

- explore using Anthropic's AI assistant Claude in public services
- contribute to work in scientific research and economic modelling
- provide the AI Security Institute with tools to evaluate AI capabilities in the context of identifying security risks
Dario Amodei, CEO and co-founder of Anthropic, expressed enthusiasm about the potential of AI to transform government services and enhance accessibility for UK residents [1][2].
The rebranding signals a move away from preventive regulation towards a more proscriptive approach. The institute will no longer focus on issues such as bias or freedom of speech in AI systems [1]. Instead, it aims to build a scientific evidence base to help policymakers ensure national safety as AI technology develops [3].
This change in direction is part of a broader trend, with other countries also reconsidering their approach to AI regulation. In the United States, for example, there are indications that the AI Safety Institute may face challenges or potential dismantling [2].
The UK government's new strategy emphasizes the integration of AI into public services. Examples of AI applications in government include:

- a Claude-based bilingual health chatbot for the Washington, DC Department of Health, built with Accenture
- the European Parliament's use of Claude for document search and analysis
- Swindon Borough Council's "Simply Readable" tool, which reformats documents for accessibility
- "Humphrey," the AI assistant planned for UK civil servants
These initiatives highlight the potential for AI to improve efficiency and accessibility in public services while generating significant cost savings [1].
While the government asserts that the work of the AI Security Institute won't fundamentally change, some observers have noted the absence of terms like "safety," "harm," and "threat" in recent policy documents [2][3]. This has raised questions about whether AI safety issues have been adequately addressed or if they are being sidelined in favor of economic growth and technological progress [2].
As the UK positions itself as a hub for AI innovation and investment, the balance between fostering technological advancement and ensuring responsible AI development remains a critical challenge for policymakers and industry leaders alike.
References

[1] UK AI Safety Institute rebranded AI Security Institute
[2] UK drops 'safety' from its AI body, now called AI Security Institute, inks MOU with Anthropic (TechCrunch)
[3] UK renames AI Security Institute, drops "safety" in pivot to cybersecurity
[4] UK's AI Safety Institute renamed to reflect new focus (Silicon Republic)