Curated by THEOUTPOST
On Thu, 20 Feb, 8:09 AM UTC
4 Sources
[1]
Potential cuts at AI Safety Institute stoke concerns in tech industry
Potential cuts to the U.S. AI Safety Institute (AISI) are causing alarm among some in the technology space who fear the development of responsible artificial intelligence (AI) could be at risk as President Trump works to downsize the federal government.

The looming layoffs at the National Institute of Standards and Technology (NIST) could reportedly impact up to 500 staffers in the AISI or Chips for America, amping up long-held suspicions that the AISI could eventually see its doors shuttered under Trump's leadership.

Since taking office last month, Trump has sought to shift the White House's tone on AI development, prioritizing innovation and maintaining U.S. leadership in the space. Some technology experts say the potential cuts undermine this goal and could impede America's competitiveness in the field.

"It feels almost like a Trojan horse. Like, the exterior of the horse is beautiful. It's big and this message that we want the United States to be the leaders in AI, but the actual actions, the [goal] within, is the dismantling of federal responsibility and federal funding to support that mission," said Jason Corso, a robotics, electrical engineering and computer science professor at the University of Michigan.

The AISI was created under the Commerce Department in 2023 in response to then-President Biden's executive order on AI. The order, which Trump rescinded on his first day in office, created new safety standards for AI, among other things. The institute is responsible for developing the testing, evaluations and guidelines for what it calls "trustworthy" AI.

AISI and Chips for America -- both housed within NIST -- could be "gutted" by layoffs aimed at probationary employees, Axios reported last week. Some of these employees received verbal notices last week about upcoming terminations, though a final decision on the scope of the cuts had not yet been made, Bloomberg reported, citing anonymous sources.

Neither the White House nor the Commerce Department responded to The Hill's request for comment Monday.

The push comes as Trump's Department of Government Efficiency (DOGE) panel, led by tech billionaire Elon Musk, takes a sledgehammer to the federal government and calls for the layoffs of thousands of federal employees to cut down on spending.

Jason Green-Lowe, the executive director of the Center for AI Policy, noted the broader NIST and the AISI are already "seriously understaffed," and any cuts may jeopardize the country's ability to create not only responsible but also effective, high-performing AI models.

"Nobody wants to pay billions of dollars for deploying AI in a critical use case where the AI is not reliable," he explained. "And how do you know that your AI is reliable? Do you know it's reliable because somebody in your company's marketing department told you?"

"There needs to be some kind of quality control that goes beyond just the individual company that's obviously under tremendous pressure to ship and get to market before their competitors," Green-Lowe continued.

Leading AI firms including OpenAI and Anthropic have agreements allowing their models to be used for research at the AISI, including studying the risks that come with the emerging tech. The AISI's job revolves around standards development; despite common misconceptions, it is not a regulatory agency and cannot impose regulations on the industry under its current structure.

Rumors have circulated that the institute will eventually shut down under Trump, and Director Elizabeth Kelly stepped down earlier this month.
The institute was also reportedly not included in the U.S. delegation to the AI Action Summit in Paris.

By cutting back or completely closing the institute, some tech experts worry, private companies' safety and trust goals will fall by the wayside. "There is really no direct incentive for a company to worry about safe AI as long as users will pay money for their product," said Corso, who is also the co-founder and CEO of computer vision startup Voxel51.

Trump has made clear he wants the U.S. to ramp up AI development in the coming months. One of the president's first actions back in office last month was the announcement of a $500 billion investment in building AI infrastructure with the help of OpenAI, SoftBank and Oracle. Meanwhile, the White House Office of Science and Technology Policy put out a request for information on the development of AI to create an "AI Action Plan" later this year.

Vice President Vance doubled down on the administration's stance earlier this month, slamming "excessive regulation" in a speech at the AI Action Summit in Paris. That followed Trump's executive order last month to remove "barriers" to U.S. leadership in the space. And Commerce Secretary Howard Lutnick recommended developing AI standards at NIST, comparing it to the department's work on cyber technology and rules.

The prospective layoffs or funding cuts would contradict the administration's remarks and moves so far and hinder America's competitive edge, various industry observers told The Hill.

"If we're going to be doing all of this investment in AI, then we need a proportional increase in the investment in the people who are doing guidelines and standards and guardrails," Green-Lowe said. "Instead, we're throwing out some of the best technical talent."

"It weakens our competitive position," he added. "If the government is serious about being in a tight race with China or others, if they're serious that we need every advantage we can get ... one of those advantages would be leading the way on development."

Many of these probationary employees are "where a lot of the AI talent is," given the increasing interest in AI over the past year, Eric Gastfriend, the executive director of the nonprofit Americans for Responsible Innovation, told The Hill.

The global AI race has heated up over the past few months, especially last month, after the high-performing and cheaply built Chinese AI model DeepSeek took the internet and stock markets by storm. "We want to have a clear picture of where China is on this technology, and the AI Safety Institute has the technical talent to be able to evaluate models like DeepSeek," Gastfriend said, adding that the institute is "getting understandings and evaluations of ... the capabilities of models and what dangers and threats do they pose."

David Sacks, Trump's AI and crypto czar, called DeepSeek a "wake-up call" for AI innovation but brushed off concerns that it will outperform American-made models. The U.S. has repeatedly tried to ensure that the production of AI-powering technology, most notably chip manufacturing, remains in America. During his confirmation hearing last month, Lutnick pledged to take an aggressive approach toward chip production.

Still, some in the AI space do not think innovation in models or equipment will come to a major halt if these layoffs take place, underscoring the continued debate around the path forward.
"I have a lot of confidence in the private sector to innovate, and I don't believe that we need the government to do the research for us or to fund research, except in special cases," said Matt Calkins, the co-founder and CEO of cloud computing and software company Appian. Today, AI is "the subject of more frantic private sector investment than anything since railroads," Calkins added. "We absolutely don't need the government to do any innovation for us." He further brushed aside concerns there will be an "immediate" risk to the safety of AI development. "All AI is converging to the same place, and it's a tool that's very valuable, and you can do some bad things with it, you can do some good things with it," he said, adding, "We know the situation across the industry, and when danger is seen, then we will doubtless wish to address it." Should the layoffs take place, or the institute lose funding, some experts suggested DOGE efforts are moving fast and corrections down the road might be needed to address what Green-Lowe described as "unintended consequences."
[2]
US AI Safety Institute will be 'gutted,' Axios reports
Sources at NIST are preparing for mass firings that would severely undermine the AI regulator. Here's what that means.

After reversing a Biden-era executive order on AI regulation and firing staff across several government agencies, the Trump administration is gearing up to make cuts to the US AI Safety Institute (AISI) next. On Wednesday, Axios reported that probationary employees at the National Institute of Standards and Technology (NIST), which houses AISI, are "bracing to be fired imminently." Sources told Axios they are preparing for 497 roles to be cut, which they believe will leave the AISI "gutted."

The AISI was created to oversee and conduct AI model testing and partner with developers on regulation efforts. AISI signed agreements with AI companies including Anthropic and OpenAI on safety and research initiatives and created a national security task force.

The cuts will impact more than just AI safety and regulation. The 497 roles also span semiconductor production efforts. Axios reports that the cuts will include "74 postdocs, 57% of CHIPS staff focused on incentives," and "67% of CHIPS staff focused on R&D," referring to the 2022 government initiative to ramp up chip development in the US. Axios notes those losses would amount to "most staff" working on the CHIPS initiative -- a somewhat confusing choice given the bullishness the Trump administration has maintained about gaining an AI advantage over China, not to mention the program's national security drivers.

News of the anticipated firings comes shortly after the Trump administration left AISI staff out of its delegation for last week's AI Action Summit in Paris -- which focused heavily on AI safety and security -- and just weeks after AISI director Elizabeth Kelly stepped down, ostensibly due to political pressure. For those reasons, the cuts won't come as a surprise given Trump's AI agenda, which de-emphasizes safety and regulation in the name of "AI dominance."
[3]
US AI Safety Institute could face big cuts | TechCrunch
The National Institute of Standards and Technology could fire as many as 500 staffers, according to multiple reports -- cuts that further threaten a fledgling AI safety organization.

Axios reported this week that the US AI Safety Institute (AISI) and Chips for America, both part of NIST, would be "gutted" by layoffs targeting probationary employees (who are typically in their first year or two on the job). And Bloomberg said some of those employees had already been given verbal notice of upcoming terminations.

Even before the latest layoff reports, AISI's future was looking uncertain. The institute, which is supposed to study risks and develop standards around AI development, was created in 2023 as part of then-President Joe Biden's executive order on AI safety. President Donald Trump repealed that order on his first day back in office, and AISI's director departed earlier in February.

Fortune spoke to a number of AI safety and policy organizations, all of which criticized the reported layoffs. "These cuts, if confirmed, would severely impact the government's capacity to research and address critical AI safety concerns at a time when such expertise is more vital than ever," said Jason Green-Lowe, executive director of the Center for AI Policy.
[4]
The head of US AI safety has stepped down. What now?
Large-scale shifts at US government agencies that monitor AI development are underway. Where does that leave AI regulation?

In October 2023, former President Joe Biden signed an executive order that included several measures for regulating AI. On his first day in office, President Trump overturned it, replacing it a few days later with his own order on AI in the US. This week, some government agencies that enforce AI regulation were told to halt their work, while the director of the US AI Safety Institute (AISI) stepped down. The National Institute of Standards and Technology (NIST) is apparently preparing for mass firings that would further impact AISI.

So what does this mean practically for the future of AI regulation? Here's what you need to know.

In addition to naming several initiatives around protecting civil rights, jobs, and privacy as AI accelerates, Biden's order focused on responsible development and compliance. However, as ZDNET's Tiernan Ray wrote at the time, the order could have been more specific, leaving loopholes in much of the guidance. Though it required companies to report on any safety testing efforts, it didn't make red-teaming itself a requirement or clarify any standards for testing. Ray pointed out that because AI as a discipline is very broad, regulating it needs -- but is also hampered by -- specificity.

A Brookings report noted in November that because federal agencies had absorbed many of the directives in Biden's order, those directives might be protected from Trump's repeal. But that protection is looking less and less likely.

Biden's order established the US AI Safety Institute (AISI), which is part of NIST. The AISI conducted AI model testing and worked with developers to improve safety measures, among other regulatory initiatives. In August, AISI signed agreements with Anthropic and OpenAI to collaborate on safety testing and research; in November, it established a testing and national security task force. Earlier this month, likely due to Trump administration shifts, AISI director Elizabeth Kelly announced her departure from the institute via LinkedIn. The fate of both initiatives, and the institute itself, is now unclear.

The Consumer Financial Protection Bureau (CFPB) also carried out many of the Biden order's objectives. For example, a June 2023 CFPB study on chatbots in consumer finance noted that they "may provide incorrect information, fail to provide meaningful dispute resolution, and raise privacy and security risks." CFPB guidance states that lenders have to provide reasons for denying someone credit, regardless of whether their use of AI makes this difficult or opaque. In June 2024, CFPB approved a new rule to ensure algorithmic home appraisals are fair, accurate, and compliant with nondiscrimination law.

CFPB is in charge of ensuring companies comply with anti-discrimination measures like the Equal Credit Opportunity Act and the Consumer Financial Protection Act, and it has noted that AI adoption can exacerbate discrimination and bias. In an August 2024 comment, CFPB noted it was "focused on monitoring the market for consumer financial products and services to identify risks to consumers and ensure that companies using emerging technologies, including those marketed as 'artificial intelligence' or 'AI,' do not violate federal consumer financial protection laws."
It also stated it was monitoring "the future of consumer finance" and "novel uses of consumer data." "Firms must comply with consumer financial protection laws when adopting emerging technology," the comment continues. It's unclear what body would enforce this if CFPB radically changes course or ceases to exist under new leadership.

On January 23rd, President Trump signed his own executive order on AI. In terms of policy, the single-line directive says only that the US must "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security." Unlike Biden's order, terms like "safety," "consumer," "data," and "privacy" don't appear at all. There are no mentions of whether the Trump administration plans to prioritize safeguarding individual protections or address bias in the face of AI development. Instead, it focuses on removing what the White House called "unnecessarily burdensome requirements for companies developing and deploying AI," seemingly prioritizing industry advancement.

The order goes on to direct officials to find and remove "inconsistencies" with it in government agencies -- that is to say, remnants of Biden's order that have been or are still being carried out.

In March 2024, the Biden administration released an additional memo stating that government agencies using AI would have to prove those tools weren't harmful to the public. Like other Biden-era executive orders and related directives, it emphasized responsible deployment, centering AI's impact on individual citizens. Trump's executive order notes that it will review (and likely dismantle) much of this memo by March 24th.

This is especially concerning given that last week, OpenAI released ChatGPT Gov, a version of OpenAI's chatbot optimized for security and government systems. It's unclear when government agencies will get access to the chatbot or whether there will be parameters around how it can be used, though OpenAI says government workers already use ChatGPT. If the Biden memo -- which has since been removed from the White House website -- is gutted, it's hard to say whether ChatGPT Gov will be held to any similar standards that account for harm.

Trump's executive order gave his staff 180 days to come up with an AI policy, meaning its deadline to materialize is July 22nd. On Wednesday, the Trump administration put out a call for public comment to inform that action plan.

The Trump administration is disrupting AISI and CFPB -- two key bodies that carry out Biden's protections -- without a formal policy in place to catch the fallout. That leaves AI oversight and compliance in a murky state for at least the next six months (millennia in AI development timelines, given the rate at which the technology evolves), all while tech giants become even more entrenched in government partnerships and initiatives like Project Stargate.

Considering global AI regulation is still far behind the rate of advancement, perhaps it was better to have something rather than nothing. "While Biden's AI executive order may have been mostly symbolic, its rollback signals the Trump administration's willingness to overlook the potential dangers of AI," said Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project.
"This could prove to be shortsighted: a high-profile failure -- what we might call a 'Chernobyl moment' -- could spark a crisis of public confidence, slowing the progress that the administration hopes to accelerate." "We don't want advanced AI that is unsafe, untrustworthy, or unreliable -- no one is better off in that scenario," he added.
Reports of potential layoffs at the US AI Safety Institute have sparked alarm in the tech industry, raising questions about the future of AI regulation and safety measures in the United States.
The US AI Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), is facing potentially significant cuts that could severely impact its operations. Reports suggest that up to 500 staffers in the AISI or Chips for America program could be affected by looming layoffs [1][2]. This development has raised concerns among technology experts and industry observers about the future of AI regulation and safety measures in the United States.
The AISI was established in 2023 under the Commerce Department in response to then-President Biden's executive order on AI [1]. Its primary responsibilities include:
Developing the testing, evaluations, and guidelines for what it calls "trustworthy" AI [1].
Conducting AI model testing and partnering with developers on safety measures [2][4].
Collaborating with leading AI firms such as OpenAI and Anthropic on safety research [1][2][4].
Since taking office, President Trump has shifted the White House's approach to AI development:
Rescinding Biden's 2023 executive order on AI on his first day in office [1][4].
Signing a new executive order focused on removing "barriers" to U.S. AI leadership and sustaining "global AI dominance" [1][4].
Announcing a $500 billion investment in AI infrastructure with OpenAI, SoftBank and Oracle [1].
Sending Vice President Vance to the AI Action Summit in Paris, where he criticized "excessive regulation" [1].
The reported cuts have sparked several concerns among experts:
Undermining AI Safety: Jason Corso, a professor at the University of Michigan, suggests that the cuts could impede the development of responsible AI [1].
Competitive Edge: Some fear that reducing federal support for AI safety could hinder America's competitiveness in the field [1].
Loss of Expertise: Eric Gastfriend of Americans for Responsible Innovation notes that many probationary employees targeted for layoffs represent valuable AI talent [1].
Quality Control: Jason Green-Lowe from the Center for AI Policy emphasizes the need for independent quality control in AI development [1][3].
The potential cuts at AISI are part of a larger trend affecting AI regulation and oversight in the U.S.:
Some government agencies that enforce AI regulation have been told to halt their work [4].
AISI director Elizabeth Kelly stepped down earlier this month [1][2][4].
The institute was reportedly excluded from the U.S. delegation to the AI Action Summit in Paris [1][2].
The Consumer Financial Protection Bureau (CFPB), which carried out many of Biden's AI directives, faces an uncertain future [4].
Leading AI firms, including OpenAI and Anthropic, have agreements with AISI for research purposes [1]. The potential dismantling of the institute could impact these collaborations and the broader effort to establish industry-wide safety standards.
With the changes in administration and potential cuts to regulatory bodies, the future of AI regulation in the U.S. remains uncertain. The Trump administration's focus on "AI dominance" and reducing "burdensome requirements" for AI companies [4] stands in contrast to the previous administration's emphasis on responsible development and compliance.
As the situation continues to evolve, stakeholders in the AI industry, policymakers, and the public will be closely watching how these changes impact the development, deployment, and regulation of AI technologies in the United States.