Curated by THEOUTPOST
On Thu, 1 Aug, 8:01 AM UTC
3 Sources
[1]
ChatGPT-maker OpenAI partners with AI Safety Institute
OpenAI has partnered with the U.S. AI Safety Institute, a federal government body, to give it early access to the company's next foundation model for safety testing. In a post on X, CEO Sam Altman said the goal was to "work together to push forward the science of AI evaluations."

Earlier this year, in May, the ChatGPT-maker disbanded its Superalignment team, which had been created to ensure that the company's AI products align with human intentions and to prevent them from going "rogue." The move led to the resignations of the team's leads: Jan Leike, who went on to join the safety research team at rival Anthropic, and co-founder Ilya Sutskever, who started his own safety-focused AI startup, Safe Superintelligence Inc. After his departure, Leike said in a post that his team had not been given the compute it was promised because the focus was entirely on launching new products. In the same post announcing the partnership with the U.S. AI Safety Institute, Altman emphasised that the company would stick to its pledge of allocating 20% of its compute to safety efforts.

In June, retired U.S. Army general Paul M. Nakasone joined the AI firm's board of directors. The blog post announcing the appointment stated, "Nakasone's appointment reflects OpenAI's commitment to safety and security, and underscores the growing significance of cybersecurity as the impact of AI technology continues to grow."

Altman also confirmed that the company will remove the indefinite non-disparagement clause from employee contracts, which had been criticised for its severity. "We want current and former employees to be able to raise concerns and feel comfortable doing so," he noted.
[2]
OpenAI pledges to give U.S. AI Safety Institute early access to its next model | TechCrunch
OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing. The announcement, which Altman made in a post on X late Thursday evening, was light on details. But it -- along with a similar deal with the U.K.'s AI safety body struck in June -- appears to be intended to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable and powerful AI technologies.

In May, OpenAI effectively disbanded a unit working on the problem of developing controls to prevent "superintelligent" AI systems from going rogue. Reporting -- including ours -- suggested that OpenAI cast aside the team's safety research in favor of launching new products, ultimately leading to the resignation of the team's two co-leads, Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).

In response to a growing chorus of critics, OpenAI said it would eliminate its restrictive non-disparagement clauses that implicitly discouraged whistleblowing and create a safety commission, as well as dedicate 20% of its compute to safety research. (The disbanded safety team had been promised 20% of OpenAI's compute for its work, but ultimately never received this.) Altman re-committed to the 20% pledge and re-affirmed that OpenAI voided the non-disparagement terms for new and existing staff in May. The moves did little to placate some observers, however -- particularly after OpenAI staffed the safety commission with all company insiders, including Altman, and, more recently, reassigned a top AI safety executive to another org.
Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI's policies in a recent letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded to the letter today, writing that OpenAI "[is] dedicated to implementing rigorous safety protocols at every stage of our process."

The timing of OpenAI's agreement with the U.S. AI Safety Institute seems a tad suspect in light of the company's endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the Safety Institute as an executive body that sets standards and guidelines for AI models. The moves together could be perceived as an attempt at regulatory capture -- or at the very least an exertion of influence from OpenAI over AI policymaking at the federal level.

Not for nothing, Altman sits on the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board, which provides recommendations for the "safe and secure development and deployment of AI" throughout the U.S.' critical infrastructures. And OpenAI has dramatically increased its expenditures on federal lobbying this year, spending $800,000 in the first six months of 2024 versus $260,000 in all of 2023.

The U.S. AI Safety Institute, housed within the Commerce Department's National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic as well as big tech firms like Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden's October AI executive order, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.
[3]
OpenAI Says 'Dedicated' to Safety in Letter to US Lawmakers
OpenAI, responding to questions from US lawmakers, said it's dedicated to making sure its powerful AI tools don't cause harm, and that employees have ways to raise concerns about safety practices. The startup sought to reassure lawmakers of its commitment to safety after five senators including Senator Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI's policies in a letter addressed to Chief Executive Officer Sam Altman.
OpenAI, the creator of ChatGPT, has announced a partnership with the U.S. AI Safety Institute. The company commits to providing early access to its future AI models and emphasizes its dedication to AI safety in a letter to U.S. lawmakers.
OpenAI, the company behind the popular ChatGPT, has announced a significant partnership with the U.S. AI Safety Institute. This collaboration marks a crucial step in ensuring the responsible development and deployment of artificial intelligence technologies [1].
As part of this partnership, OpenAI has pledged to provide the institute with early access to its future AI models. This move is aimed at allowing thorough testing and evaluation of these models before their public release, potentially setting a new standard for AI safety practices in the industry [2].
In a letter addressed to U.S. lawmakers, OpenAI has reaffirmed its dedication to AI safety. The company emphasized its commitment to developing AI systems that are not only innovative but also safe and beneficial to society [3].
This communication comes at a time when there is increasing scrutiny of AI technologies and their potential impacts. OpenAI's proactive approach in engaging with regulatory bodies and safety institutions demonstrates its recognition of the importance of responsible AI development.
The partnership between OpenAI and the U.S. AI Safety Institute is expected to have far-reaching implications for the future of AI development. By providing early access to its models, OpenAI is enabling more comprehensive safety assessments, which could help identify and mitigate potential risks before these models are made available to the public [2].
This collaboration may also inspire other AI companies to adopt similar practices, potentially leading to a more standardized approach to AI safety across the industry. It represents a significant step towards balancing rapid technological advancement with necessary safety precautions.
OpenAI's partnership with the U.S. AI Safety Institute and its communication with lawmakers reflect the evolving regulatory landscape surrounding AI technologies. As governments worldwide grapple with how to regulate AI, proactive measures by industry leaders like OpenAI could help shape future policies and regulations [3].
The tech industry's response to this development will be crucial. If other major AI companies follow suit, it could lead to a more collaborative approach to AI safety, with increased transparency and cooperation between developers, safety institutions, and regulatory bodies.
© 2024 TheOutpost.AI All rights reserved