Curated by THEOUTPOST
On Thu, 6 Mar, 4:02 PM UTC
5 Sources
[1]
Anthropic Backs Classified Info-Sharing Between AI Companies, US Government
As 'powerful AI systems' threaten to upend national security and the economy, Anthropic backs 'classified communication channels between AI labs and intelligence agencies.'

Anthropic says the US government needs classified communication channels with AI companies. The recommendation is one of many in a 10-page document that Anthropic submitted to the US Office of Science and Technology Policy (OSTP) in response to the Trump administration's call for public comment on its AI action plan.

From Anthropic's perspective, the US needs to prepare for powerful AI systems capable of "matching or exceeding" the intellectual capacity of Nobel Prize winners, which could arrive as soon as 2026 or 2027. It points to the progress of its latest model, the Pokémon-playing Claude 3.7 Sonnet, as proof of how fast the tech is evolving.

"Classified communication channels between AI labs and intelligence agencies" could help the US combat national security threats, along with "expedited security clearances for industry professionals" and a new set of security standards for AI infrastructure. But will that leave the public in the dark about critical decisions, as many jobs and industries are grappling with the effects of AI?

Soon, AIs will be able to do jobs that "highly capable" humans can do today, including navigating digital interfaces and "interfacing with the physical world" by controlling lab equipment and manufacturing tools. Anthropic says this could lead to "potential large-scale changes to the economy." To monitor these changes, it recommends "modernizing economic data collection, like the Census Bureau's surveys."

President Trump reversed the Biden administration's executive order on AI and replaced it with one titled "Removing Barriers to American Leadership in Artificial Intelligence." Although the new administration is expected to take a relatively hands-off approach to AI regulation, Anthropic says the government needs to stay involved.
It should track the development of AI systems, create "standard assessment frameworks," and accelerate its own adoption of AI tools, which is one stated goal of Elon Musk's Department of Government Efficiency (DOGE). Anthropic also calls for building more AI infrastructure, such as the $500 billion Stargate project, and further restricting semiconductor exports to adversaries. "We believe the United States must take decisive action to maintain technological leadership," Anthropic says.

In the past, Anthropic CEO Dario Amodei has supported government regulations for potentially threatening AI systems. The company wrote a lengthy letter in support of California's AI safety bill, citing the "importance of averting catastrophic misuse" of the technology. Governor Gavin Newsom ultimately vetoed the bill over concern that it only targeted large tech companies and ignored the threats presented by smaller ones.

The comment period for the Trump administration's AI action plan ends on March 15.
[2]
Anthropic Quietly Removes Biden-Era AI Safety Pledge
Major US artificial intelligence (AI) firm Anthropic has quietly removed the voluntary commitments it made towards AI safety in 2023, AI watchdog group The Midas Project reported yesterday. Anthropic removed the "White House's Voluntary Commitments for Safe, Secure, and Trustworthy AI," which were introduced during US President Joe Biden's term. The commitments were removed "seemingly without a trace" from the company's "Transparency Hub" webpage, The Midas Project remarked, noting that other changes to the page were minor.

Why It Matters

In July 2023, several AI companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, agreed to comply with the aforementioned AI commitments. This was received with much celebration and hopes of building an AI ecosystem that would champion transparency and safety while also bringing the much-touted AI solutions to people's daily lives. As part of this commitment, Anthropic had pledged, among other things, to security-test its AI systems before release, invest in cybersecurity, and develop watermarking for AI-generated content.

Do these forward-looking company policies then need to change with changing administrations? The Midas Project pointed out in its tweet: "nothing in the commitments suggested that the promise was (1) time-bound or (2) contingent on the party affiliation of the sitting president."

This step appears to be part of a trend in which safety and trust in AI are taking a backseat. Over the last couple of months, OpenAI has rolled out updates and initiatives aligned with the newly elected government's stance on key policy matters. Just last month, it updated its policy to state that its AI models should "empower people to explore, debate, and create without arbitrary restrictions -- no matter how challenging or controversial a topic may be." Additionally, in January this year, it launched a new ChatGPT version tailor-made for the United States Government, called ChatGPT Gov.
The company said the launch of the tool reflects its "commitment to helping U.S. government agencies leverage OpenAI's technology" and also referred to one of US President Donald Trump's executive orders (EOs). The Midas Project further noted that Anthropic removed the commitments without any trace from its website, an online resource aimed at "raising the bar on transparency."

Changing landscape of AI regulations

AI regulations in the US have undergone major changes since US President Donald Trump took over the country's administration for the second time. In one of his first moves after assuming office, he signed several EOs, many of which repealed actions taken under Biden. One of them revoked a 2023 directive that outlined measures for ensuring AI safety and security, citizen privacy, equity, protection of consumers' and workers' rights, and the promotion of innovation. That repeal was among many moves being seen as regressive steps in the American political landscape. While some believe these EOs will face legal challenges, since they are subject to judicial review and may be blocked if they violate the Constitution of the United States, the same cannot be said about the one concerning AI safety commitments. This could be because AI regulation in general remains a topic that is still largely under discussion, and even a point of divergence among governments. Just last month, both the US and UK refused to sign the Paris AI Action Summit Joint Statement on "safe" AI. While the UK said it had concerns about how national security plays out under the provisions of the statement, the US said it was against excessive regulation and prioritised innovation over safety in the AI domain. Back home, the Indian government has also been giving mixed signals, with its fluctuating stance between heavy and light-touch regulation.
[3]
Anthropic submits AI policy recommendations to the White House
A day after quietly removing Biden-era AI policy commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy that the company says "better prepare[s] America to capture the economic benefits" of AI. The company's suggestions include preserving the AI Safety Institute established under the Biden Administration, directing NIST to develop national security evaluations for powerful AI models, and building a team within the government to analyze potential security vulnerabilities in AI. Anthropic also calls for hardened AI chip export controls, particularly restrictions on the sale of Nvidia H20 chips to China, in the interest of national security. To fuel AI data centers, Anthropic recommends the U.S. establish a national target of building 50 additional gigawatts of power dedicated to the AI industry by 2027. Several of the policy suggestions closely align with former President Biden's AI Executive Order, which Trump repealed in January. Critics allied with Trump argued that the order's reporting requirements were onerous.
[4]
Anthropic quietly removes Biden-era AI policy commitments from its website | TechCrunch
Anthropic has quietly removed from its website several voluntary commitments the company made in conjunction with the Biden Administration in 2023 to promote safe and "trustworthy" AI. The commitments, which included pledges to share information on managing AI risks across industry and government and research on AI bias and discrimination, were deleted from Anthropic's transparency hub last week, according to AI watchdog group The Midas Project. Other Biden-era commitments relating to reducing AI-generated image-based sexual abuse remain. Anthropic appears to have given no notice of the change. The company didn't immediately respond to a request for comment. Anthropic, along with companies including OpenAI, Google, Microsoft, Meta, and Inflection, announced in July 2023 that it had agreed to adhere to certain voluntary AI safety commitments proposed by the Biden Administration. The commitments included internal and external security tests of AI systems before release, investing in cybersecurity to protect sensitive AI data, and developing methods of watermarking AI-generated content. To be clear, Anthropic had already adopted a number of the practices outlined in the commitments, and the accord wasn't legally binding. But the Biden Administration's intent was to signal its AI policy priorities ahead of the more exhaustive AI Executive Order, which came into force several months later. The Trump Administration has indicated that its approach to AI governance will be quite different. In January, President Trump repealed the aforementioned AI Executive Order, which had instructed the National Institute of Standards and Technology to author guidance that helps companies identify -- and correct -- flaws in models, including biases. Critics allied with Trump argued that the order's reporting requirements were onerous and effectively forced companies to disclose their trade secrets. 
Shortly after revoking the AI Executive Order, Trump signed an order directing federal agencies to promote the development of AI "free from ideological bias" that promotes "human flourishing, economic competitiveness, and national security." Importantly, Trump's order made no mention of combatting AI discrimination, which was a key tenet of Biden's initiative. As The Midas Project noted in a series of posts on X, nothing in the Biden-era commitments suggested that the promise was time-bound or contingent on the party affiliation of the sitting president. In November, following the election, multiple AI companies confirmed that their commitments hadn't changed. Anthropic isn't the only firm to adjust its public policies in the months since Trump took office. OpenAI recently announced it would embrace "intellectual freedom ... no matter how challenging or controversial a topic may be," and work to ensure that its AI doesn't censor certain viewpoints. OpenAI also scrubbed a page on its website that used to express the startup's commitment to diversity, equity, and inclusion, or DEI. These programs have come under fire from the Trump Administration, leading a number of companies to eliminate or substantially retool their DEI initiatives. Many of Trump's Silicon Valley advisers on AI, including Marc Andreessen, David Sacks, and Elon Musk, have alleged that companies, including Google and OpenAI, have engaged in AI censorship by limiting their AI chatbots' answers. Labs including OpenAI have denied that their policy changes are in response to political pressure. Both OpenAI and Anthropic have or are actively pursuing government contracts.
[5]
Anthropic quietly scrubs Biden-era responsible AI commitment from its website
AI companies continue to scrub evidence of Biden-era AI safety policy from their communications as attitudes shift under Trump. Anthropic appears to have removed Biden-era commitments to creating safe AI from its website. Originally flagged by an AI watchdog called The Midas Project, the language was removed last week from Anthropic's transparency hub, where the company lists its "voluntary commitments" related to responsible AI development. Though not binding, the deleted language promised to share information and research about AI risks, including bias, with the government.

Alongside other big tech companies -- including OpenAI, Google, and Meta -- Anthropic joined the voluntary agreement to self-regulate in July 2023 as part of the Biden administration's AI safety initiatives, many of which were later codified in Biden's AI executive order. The companies committed to certain standards for security testing models before release, watermarking AI-generated content, and developing data privacy infrastructure. Anthropic later agreed to work with the AI Safety Institute (created under that order) to carry out many of the same priorities. However, the Trump administration will likely dissolve the Institute, leaving its initiatives in limbo.

Anthropic did not publicly announce the removal of the commitment from its site and maintains that its existing stances on responsible AI are unrelated to, or predate, Biden-era agreements. The move is the latest in a series of public- and private-sector developments around AI -- many of which impact the future of AI safety and regulation -- under the Trump administration. On his first day in office, Trump reversed Biden's executive order, and he has since fired several AI experts within the government and axed some research funding.
These changes appear to have kicked off a tonal shift in several major AI companies, some of which are taking the opportunity to expand their government contracts and work closely with the government to shape a still-unclear AI policy under Trump. Companies like Google are changing already-loose definitions of responsible AI, for example. Overall, the government has lost or is slated to lose much of the already-slim AI regulation created under Biden, and companies ostensibly have even fewer external incentives to place checks on their systems or answer to a third party. Safety checks for bias and discrimination do not appear so far in Trump's communications on AI.
Anthropic, a major AI company, has quietly removed Biden-era AI safety commitments from its website and submitted new policy recommendations to the Trump administration, signaling a significant shift in the AI regulatory landscape.
Anthropic, a major US artificial intelligence company, has quietly removed the voluntary commitments it made towards AI safety during the Biden administration from its website 1. The commitments, which were part of the "White House's Voluntary Commitments for Safe, Secure, and Trustworthy AI," have been deleted from Anthropic's "Transparency Hub" without any public announcement 4.
Following this removal, Anthropic submitted recommendations to the White House for a national AI policy that the company claims will "better prepare America to capture the economic benefits" of AI 3. These recommendations include:

- Preserving the AI Safety Institute established under the Biden administration
- Directing NIST to develop national security evaluations for powerful AI models
- Hardening AI chip export controls, particularly restrictions on the sale of Nvidia H20 chips to China
- Setting a national target of building 50 additional gigawatts of power dedicated to the AI industry by 2027
Anthropic has expressed concerns about the potential national security threats posed by powerful AI systems. The company supports the creation of "classified communication channels between AI labs and intelligence agencies" and recommends "expedited security clearances for industry professionals" 1. Anthropic's CEO, Dario Amodei, has previously supported government regulations for potentially threatening AI systems.
The removal of Biden-era commitments and the submission of new recommendations come in the context of significant changes in US AI policy under the Trump administration. President Trump has reversed the Biden administration's executive order on AI and replaced it with one titled "Removing Barriers to American Leadership in Artificial Intelligence" 1.
Anthropic is not alone in adjusting its public policies since Trump took office. Other companies, such as OpenAI, have also made changes to their AI policies, including embracing "intellectual freedom ... no matter how challenging or controversial a topic may be" 4.
These developments raise questions about the future of AI safety and regulation in the United States. The Trump administration's approach appears to prioritize innovation and economic competitiveness over some of the safety and ethical concerns emphasized by the previous administration 5. This shift has led to the potential dissolution of initiatives like the AI Safety Institute and changes in how companies approach responsible AI development.
As the AI landscape continues to evolve rapidly, the balance between innovation, national security, and ethical considerations remains a critical point of discussion in shaping the future of AI policy and regulation.
© 2025 TheOutpost.AI All rights reserved