Curated by THEOUTPOST
On Wed, 17 Jul, 4:02 PM UTC
2 Sources
[1]
Read the letter OpenAI whistleblowers sent to the SEC calling for action on NDAs
The letter states that the whistleblowers provided documents to the SEC supporting their claims that OpenAI's NDAs "violated numerous precedents of the SEC."

Sen. Grassley said in a statement shared with Business Insider that assessing the threats posed by AI fell under Congress's constitutional responsibility to protect national security. He added: "OpenAI's policies and practices appear to cast a chilling effect on whistleblowers' right to speak up and receive due compensation for their protected disclosures. In order for the federal government to stay one step ahead of artificial intelligence, OpenAI's nondisclosure agreements must change."

OpenAI didn't respond to a request for comment from BI. An SEC representative said: "The SEC does not comment on the existence or nonexistence of a possible whistleblower submission."

The whistleblowers' complaint comes after Vox reported in May that OpenAI could claw back vested equity from departing employees who declined to sign non-disparagement agreements. Sam Altman said on X shortly after the report was published that he "did not know this was happening."

Nine former and current OpenAI employees signed an open letter in June calling on major AI firms to ensure greater transparency and better protections for whistleblowers. William Saunders, a former OpenAI employee who quit earlier this year after losing confidence that the company could responsibly mitigate AI risks, previously told BI what led him to sign the letter and speak out. He said the firing of another former OpenAI employee, Leopold Aschenbrenner, and the requirement that OpenAI staff sign NDAs shaped the four principles set out in the June open letter.
[2]
Whistleblowers Accuse OpenAI of Illegally Restrictive Contracts
OpenAI whistleblowers wrote to the US Securities and Exchange Commission (SEC), alleging that the AI startup enforced stringent employment and nondisclosure contracts on its workers, possibly resulting in penalties against staff who raise concerns about AI safety, The Washington Post reported. OpenAI allegedly made staff sign employee agreements that required them to waive their federal rights to whistleblower compensation, while also requiring prior consent from the company if employees wished to disclose information to authorities.

The letter urged the SEC to take action against OpenAI over the potentially illegal nondisclosure agreements. It also called for the company to disclose all relevant agreements, inform employees of violations and of their right to report them, and face fines for each improper agreement.

Senior staff at the company have raised several concerns in the past about OpenAI's safety procedures and lack of protection for whistleblowers. Last month, a group of former and current employees at frontier AI companies like OpenAI and Google wrote an open letter asking for greater whistleblower protections for those who raise the alarm about the dangers of AI. These dangers range from further entrenchment of existing inequalities to manipulation and misinformation to uncontrolled AI systems resulting in human extinction. The letter asked AI companies to refrain from enforcing restrictive employment contracts and to facilitate an anonymous system for raising safety concerns.

In the same vein, a former OpenAI employee claimed last month that he was fired for raising security concerns to the company's board. He alleged that the company's management was displeased with an internal memo he wrote on OpenAI's security policies, which he felt were "egregiously insufficient" to protect model weights and key algorithmic secrets from theft by foreign actors.

Earlier this year, the AI startup saw several key employees working on AI safety, including Ilya Sutskever and Jan Leike, resign, with Leike accusing OpenAI of prioritizing "shiny" products over safety culture and processes.
Former OpenAI employees have filed a complaint with the SEC, claiming the company uses illegal non-disclosure agreements to suppress information. The whistleblowers argue these NDAs violate federal whistleblower protection laws.
Former employees of OpenAI have filed a complaint with the U.S. Securities and Exchange Commission (SEC). The whistleblowers allege that OpenAI, a leading AI research company, has used illegal non-disclosure agreements (NDAs) to suppress information, in potential violation of federal whistleblower protection laws [1].

The complaint, filed by an undisclosed number of former OpenAI employees, centers on the company's use of NDAs. According to the whistleblowers, these agreements are designed to prevent employees from sharing information about potential wrongdoing or illegal activity within the company, in direct violation of federal laws that protect whistleblowers and ensure transparency in corporate operations [2].

Using NDAs to silence employees is not new in the tech industry, but the allegations against OpenAI are particularly concerning given the company's influential position in the AI field. If proven true, these practices could have far-reaching consequences for OpenAI and set a precedent for how NDAs are used across the tech sector [1].

By bringing their complaint to the SEC, the whistleblowers have elevated the issue to the federal level. The SEC has the authority to investigate such claims and, if warranted, take enforcement action against companies found to have violated securities laws or regulations protecting whistleblowers [2].

The complaint comes at a crucial time for OpenAI, as the company continues to make headlines with its AI technologies. If substantiated, the allegations could damage OpenAI's reputation, carry legal consequences, and prompt a broader discussion about corporate transparency and employee rights in the rapidly evolving AI industry [1].

The case may also ripple through the tech industry: other companies may need to reassess their own NDA practices to ensure compliance with whistleblower protection laws, potentially leading to greater scrutiny of corporate policies and more transparent practices across the sector [2].

As the SEC reviews the complaint, attention will be on OpenAI and its response to these serious allegations. The outcome could set important precedents for how tech companies handle sensitive information and treat potential whistleblowers, and fuel debate over the balance between corporate secrecy and the public's right to know about potential wrongdoing at influential tech companies [1].
© 2025 TheOutpost.AI All rights reserved