Curated by THEOUTPOST
On Wed, 24 Jul, 12:03 AM UTC
4 Sources
[1]
Senators demand ChatGPT-maker OpenAI turn over safety data
In a letter to OpenAI CEO Sam Altman, the five lawmakers asked a series of questions on how the company is working to ensure AI cannot be misused to provide potentially harmful information -- such as giving instructions on how to build weapons or assisting in the coding of malware -- to members of the public. In addition, the group sought assurances that employees who raise potential safety issues would not be silenced or punished. Concerns voiced by former employees have led to a flurry of media reports, and the senators pressed the company on how it is addressing them. "We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company's identification and mitigation of cybersecurity threats," the five senators wrote in a letter obtained by The Washington Post.
[2]
Senators press OpenAI over safety concerns after whistleblower complaint
Several senators pressed OpenAI for answers Monday about its safety and employment practices after a group of whistleblowers filed a complaint alleging the company blocked staff from warning regulators about the risks of its artificial intelligence (AI) technology. Led by Sen. Brian Schatz (D-Hawaii), the group of mostly Democratic senators asked OpenAI CEO Sam Altman about the AI startup's public commitments to safety, as well as its treatment of current and former employees who voice concerns.

"Given OpenAI's position as a leading AI company, it is important that the public can trust in the safety and security of its systems," Schatz, alongside Sens. Ben Ray Lujan (D-N.M.), Peter Welch (D-Vt.), Mark Warner (D-Va.) and Angus King (I-Maine), wrote in Monday's letter. "This includes the integrity of the company's governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies," they continued.

The startup behind the popular AI chatbot tool ChatGPT has come under increased scrutiny after The Washington Post obtained a complaint filed by several whistleblowers with the Securities and Exchange Commission (SEC) earlier this month. The whistleblowers alleged that OpenAI gave its employees restrictive employment, severance and nondisclosure agreements that required them to waive their federal rights to whistleblower compensation and penalized them for raising concerns with regulators.

"Given the risks associated with the advancement of AI, there is an urgent need to ensure that employees working on this technology understand that they can raise complaints or address concerns to federal regulatory or law enforcement authorities," the whistleblowers wrote in their complaint.
In Monday's letter to Altman, the senators asked the OpenAI CEO to confirm that the company will not enforce permanent non-disparagement agreements for its employees and to commit to removing any other provision that could be used to penalize employees for publicly speaking out. "If not, please explain why, and any internal protections in place to ensure that these provisions are not used to financially disincentivize whistleblowers," they added.
[3]
Senators Press Sam Altman For Safety, Transparency At OpenAI: Vulnerable AI Is 'Not Acceptable'
The strongly worded letter emphasizes the importance of research and pro-safety employment practices. Former and current employees at ChatGPT's parent company, OpenAI, have expressed concern that safety is not being taken seriously enough at the now-for-profit organization. Such mistrust has also reached the highest levels of government.

What Happened: Five senators -- Brian Schatz (D-HI), Peter Welch (D-VT), Ben Ray Luján (D-NM), Mark Warner (D-VA) and Angus King (I-ME) -- sent a letter to OpenAI co-founder and CEO Sam Altman on Monday. The letter, obtained by the Washington Post, urges Altman to take AI safety seriously and seeks additional information on OpenAI's safety plans. The senators expressed concern about OpenAI's employment practices and retaliatory measures against whistleblowers. The authors of the letter also want to ensure safety given OpenAI's work with the U.S. government. "National and economic security are among the most important responsibilities of the United States Government, and unsecure or otherwise vulnerable AI systems are not acceptable," the letter reads. "Given OpenAI's position as a leading AI company, it is important that the public can trust in the safety and security of its systems." The letter proceeds to request information on several OpenAI practices. The senators prompt Altman to confirm that OpenAI will honor its previous commitment to dedicate 20% of its computing resources to research on AI safety. The letter also asks OpenAI not to enforce permanent non-disparagement agreements and to establish protocols for employees to voice safety concerns.

Why It Matters: Many experts, including co-founder Elon Musk, have voiced concern over AI safety and its ultimate effects on humanity. Musk is no longer associated with OpenAI. Altman's ouster and subsequent reinstatement at OpenAI in 2023 was not linked to safety, but a divide exists among employees at the organization over how to weigh profits against safety.
Chief Scientist Ilya Sutskever left OpenAI in May.
[4]
OpenAI shares safety updates after whistleblower complaints, lawmaker demands
In an X thread, the AI giant explains "how we're prioritizing safety in our work." Here's what's new. Safety is one of the biggest concerns regarding the rapid growth of generative artificial intelligence models. A seven-page complaint filed by whistleblowers and obtained by The Washington Post regarding OpenAI's safety practices has only heightened these apprehensions. As a result, OpenAI is now sharing an update on its safety initiatives with the public.

On Tuesday, OpenAI took to X, formerly Twitter, to share a roundup of updates on "how we're prioritizing safety in our work." The thread included highlights of recent initiatives and updates on current projects.

OpenAI began by highlighting its Preparedness Framework, first released in beta in December. The Framework includes various precautions OpenAI takes to ensure the safety of its frontier AI models, such as giving model scorecards on different metrics and not releasing a new model if it crosses a "medium" risk threshold. In tandem, OpenAI shared that it is actively developing "levels" to help OpenAI and stakeholders categorize and track AI progress, which the company will share more details about "soon."

OpenAI also offered updates on its Safety and Security Committee, initially launched by OpenAI's board of directors in May to add another layer of checks and balances to its operations. The review conducted by the committee, which includes technical and policy experts, is now underway. Once it concludes, OpenAI will share further steps it plans to take.

Lastly, OpenAI brought attention to its whistleblower policy, which "protects employees' rights to make protected disclosures," according to the company. To promote conversations regarding the technology, the company also changed its employees' departure process, removing non-disparagement terms.
These updates were shared just one day after US lawmakers demanded OpenAI share data regarding its safety practices following the whistleblower report. Some of the report's major points included that OpenAI prevented staff from alerting proper authorities regarding technology risks and made employees waive their federal rights to whistleblower compensation, both of which were addressed in the X post.
U.S. Senators are pressing OpenAI CEO Sam Altman for transparency on AI safety measures following whistleblower complaints. The demand comes as lawmakers seek to address potential risks associated with advanced AI systems.
In a significant development at the intersection of technology and policy, a group of five U.S. senators led by Brian Schatz (D-Hawaii) has demanded that OpenAI CEO Sam Altman provide detailed information about the company's artificial intelligence (AI) safety measures. This request comes in the wake of whistleblower complaints and growing concerns about the potential risks associated with advanced AI systems 1.
The senators' action was prompted by a whistleblower complaint alleging that OpenAI imposed restrictive employment, severance and nondisclosure agreements that discouraged employees from warning regulators about the risks of its technology and required them to waive their federal rights to whistleblower compensation. These claims have raised alarms about transparency and accountability at leading AI companies 2.
In their letter to Altman, the senators outlined several key areas of concern, including the company's governance structure, safety testing, employment practices, and cybersecurity policies. They emphasized that "unsecure or otherwise vulnerable AI systems are not acceptable" and requested a comprehensive response from OpenAI 3.
In response to the mounting pressure, OpenAI has recently shared updates on its safety measures, pointing to its Preparedness Framework for evaluating frontier models, the ongoing review by its Safety and Security Committee, and changes to its employee departure process that remove non-disparagement terms. These updates are aimed at addressing concerns about the potential misuse of AI technologies and ensuring the responsible development of advanced AI systems 4.
This development highlights the growing scrutiny faced by AI companies and the increasing calls for regulatory oversight in the rapidly evolving field of artificial intelligence. As AI technologies become more sophisticated and integrated into various aspects of society, policymakers are grappling with the challenge of balancing innovation with safety and ethical considerations.
The senators' demand for transparency from OpenAI could set a precedent for how government bodies interact with AI companies, potentially paving the way for more stringent reporting requirements and safety standards across the industry. As the dialogue between tech leaders and lawmakers continues, the outcome of this inquiry may have far-reaching implications for the future of AI development and regulation.
OpenAI, the creator of ChatGPT, has announced a partnership with the U.S. AI Safety Institute. The company commits to providing early access to its future AI models and emphasizes its dedication to AI safety in a letter to U.S. lawmakers.
3 Sources
Whistleblowers have urged the U.S. Securities and Exchange Commission to investigate OpenAI's non-disclosure agreements, alleging they hinder employees from reporting potential risks associated with artificial intelligence development.
21 Sources
A group of whistleblowers has urged the U.S. Securities and Exchange Commission to investigate OpenAI's non-disclosure agreements, claiming they may violate federal whistleblower protection laws. The AI company faces scrutiny over its practices and transparency.
17 Sources
OpenAI, the leading AI research company, experiences a significant data breach. Simultaneously, the company faces accusations of breaking its promise to allow independent testing of its AI models.
2 Sources
OpenAI has announced the creation of a new independent board to oversee the safety and ethical implications of its AI technologies. This move comes as the company aims to address growing concerns about AI development and its potential risks.
15 Sources
© 2025 TheOutpost.AI All rights reserved