Curated by THEOUTPOST
On Thu, 29 Aug, 4:05 PM UTC
5 Sources
[1]
US AI Safety Institute will have access to OpenAI and Anthropic for a safer AI-led future - SiliconANGLE
OpenAI and Anthropic PBC today announced they have agreed to share AI models before and after release with the U.S. government's AI Safety Institute. The institute, housed at the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), was set up through an executive order issued by President Biden in 2023. Working with a consortium of companies and experts, its focus is on establishing safety guidelines and best practices while evaluating potentially dangerous AI systems. The companies explained that the institute will have early access to new models before release, as well as continued access once they are on the market. The institute will also share feedback with its counterpart in the U.K. "Safety is essential to fueling breakthrough technological innovation," said Elizabeth Kelly, director of the U.S. AI Safety Institute. "With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety. These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI." The existential threat AI could pose to humanity was a hot-button topic long before the recent surge in the use of generative AI. The prevailing rationale has been to proceed with care while not getting bogged down in AI panics. In an open letter in June, signed by a group of current and former researchers from OpenAI, Alphabet Inc.'s Google DeepMind research group, and Anthropic, the signatories asked for more transparency and oversight to protect the public from potentially harmful AI products. They warned that "strong financial incentives" could mean a lack of "effective oversight," adding that the leading AI companies in the U.S. "have only weak obligations to share some of this information with governments, and none with civil society," and are unlikely to "share it voluntarily." Today's announcement might placate the concerned group. OpenAI's Sam Altman wrote on X that it's "important that this happens at the national level," adding that the "U.S. needs to continue to lead!" "This strengthens our ability to identify and mitigate risks, advancing responsible AI development," said Anthropic co-founder and Head of Policy Jack Clark in a statement to the media. "We're proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI."
[2]
OpenAI and Anthropic Open Up to US AI Safety Institute - Decrypt
The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) has made a deal with top AI developers OpenAI and Anthropic to establish formal collaboration with the U.S. AI Safety Institute (AISI), the agency said on Thursday. The institute would "receive access to major new models from each company prior to and following their public release," according to the announcement. Anthropic is developing Claude, while OpenAI offers ChatGPT. The arrangement will allow the Institute to evaluate the capabilities and safety risks of the respective AI models. "Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety," agency director Elizabeth Kelly said in a statement. "These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI." Anthropic and OpenAI did not immediately respond to requests for comment by Decrypt. "Looking forward to doing a pre-deployment test on our next model with the U.S. AISI," Anthropic co-founder Jack Clark wrote on Twitter. "Third-party testing is a really important part of the AI ecosystem and it's been amazing to see governments stand up safety institutes to facilitate this." "We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models," OpenAI co-founder and CEO Sam Altman wrote on Twitter. "For many reasons, we think it's important that this happens at the national level. [The] U.S. needs to continue to lead!" The issue of AI safety has permeated every level of the industry, with many leading experts and executives leaving OpenAI over concerns about its practices -- in some cases forming rival companies centered around cautious development. Governments are also concerned. 
Launched in October 2023, the AISI was formed after President Biden issued a sweeping Executive Order aimed at reining in artificial intelligence development. In February, the Biden Administration announced the first members of what it called the AISI Consortium (AISIC). The AISIC included several high-profile AI firms, among them OpenAI, Anthropic, Google, Apple, NVIDIA, Microsoft, and Amazon. The U.S. AISI said it will share its OpenAI and Anthropic findings with its counterpart, the U.K. AI Safety Institute.
[3]
OpenAI, Anthropic Agree to Work With US Institute on Safety Testing
The US government announced agreements with leading artificial intelligence startups OpenAI and Anthropic to help test and evaluate their upcoming technologies for safety. Under the agreements, announced on Thursday, the US AI Safety Institute will receive early access to major new AI models from the companies to evaluate capabilities and risks as well as collaborate on methods to mitigate potential issues. The AI Safety Institute is part of the Commerce Department's National Institute of Standards and Technology, or NIST. The agreements come at a time when there's been an increasing push to mitigate potentially catastrophic risks of AI through regulation, such as the controversial California AI safety bill SB 1047, which recently passed the state Assembly.
[4]
OpenAI and Anthropic agree to share models with U.S. government safety institute
Driving the news: Anthropic and OpenAI have each signed a memorandum of understanding to allow formal collaboration with the U.S. Artificial Intelligence Safety Institute, a part of the Commerce Department's National Institute of Standards and Technology. Between the lines: The U.S. AI Safety Institute was set up as part of President Biden's AI executive order. What they're saying: "Safety is essential to fueling breakthrough technological innovation," U.S. AI Safety Institute director Elizabeth Kelly said in a statement. "These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI."
[5]
OpenAI, Anthropic sign deals with US govt for AI research and testing
(Reuters) - AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the U.S. Artificial Intelligence Safety Institute said on Thursday. The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies. California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state. Under the deals, the U.S. AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release. The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them. "We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI. Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment. "These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the U.S. AI Safety Institute. The institute, a part of the U.S. commerce department's National Institute of Standards and Technology (NIST), will also collaborate with the U.K. AI Safety Institute and provide feedback to the companies on potential safety improvements. The U.S. AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models. (Reporting by Harshita Mary Varghese in Bengaluru; Editing by Shinjini Ganguli)
Leading AI companies OpenAI and Anthropic have agreed to collaborate with the US AI Safety Institute to enhance AI safety and testing. This partnership aims to promote responsible AI development and address potential risks associated with advanced AI systems.
In a significant move towards ensuring the responsible development of artificial intelligence, leading AI companies OpenAI and Anthropic have entered into agreements with the US government to collaborate with the newly established US AI Safety Institute [1]. The partnership marks a crucial step in addressing the potential risks associated with advanced AI systems and promoting a safer AI-led future.
As part of the collaboration, OpenAI and Anthropic have agreed to provide the US AI Safety Institute with access to their most advanced AI models, both before and after public release [2]. This level of access will enable government researchers to conduct thorough safety testing and evaluation of these cutting-edge systems, and signals greater transparency and cooperation between the private sector and government agencies in AI development.
The partnership will focus on several aspects of AI safety, including new testing methodologies, risk assessment frameworks, and safety standards [3]. By combining the expertise of private-sector and government researchers, the collaboration aims to identify potential vulnerabilities and develop robust safeguards for AI systems.
The initiative aligns with the Biden administration's broader strategy to promote responsible AI development and maintain US leadership in the field [4]. The collaboration between OpenAI, Anthropic, and the US AI Safety Institute demonstrates a commitment to addressing the complex challenges posed by advanced AI technologies through a coordinated effort between industry leaders and government bodies.
The partnership is also expected to have far-reaching implications for the AI industry as a whole [5]. By setting a precedent for collaboration between private companies and government agencies, it may encourage other AI firms to enter similar safety-focused partnerships, which could in turn lead to industry-wide standards and best practices for AI safety and ethics.
As the collaboration progresses, it is expected to yield valuable insights into AI safety and contribute to the development of more robust and reliable AI systems. Its outcomes may influence future regulatory frameworks and shape the trajectory of AI development on a global scale.