5 Sources
[1]
OpenAI thinks Elon Musk funded its biggest critics -- who also hate Musk
Over the past week, OpenAI has faced backlash over subpoenas it sent to nonprofits accused of conspiring with Elon Musk to amplify public criticism of OpenAI as it sought to shift from a nonprofit to for-profit structure. The subpoenas are supposed to support OpenAI's defense in a lawsuit Musk's X Corp filed to block the for-profit transition. Seeking a "wide variety of documents" -- including a sweeping request for all communications regarding Musk and all information on nonprofits' funders and donations -- OpenAI claimed that the subpoenas are intended to probe if Musk was involved in the actions or paid nonprofits to make critical comments, NBC News wrote in a report exhaustively documenting the controversy. But nonprofits have alleged it's obvious that OpenAI is using the lawsuit to harass, silence, and intimidate its critics -- most glaringly when it comes to targeting nonprofits that are even more publicly critical of Musk's companies than they are of OpenAI. Emma Ruby-Sachs, executive director for Ekō -- a nonprofit that serves as a global consumer watchdog holding the "biggest companies in the world accountable" -- told NBC News that "the logical basis" for sending the subpoena "is so ridiculous that we have to assume this is just a tactic to scare us and get us to back off." Ruby-Sachs noted that Ekō called for Musk to be fired as the head of DOGE earlier this year. Running a billboard in Times Square that showed a grinning Musk donning a crown, Ekō urged passersby to contact Congress if they "don't want a king." Further, Ekō had corresponded with OpenAI prior to receiving the subpoena, confirming that "we're over 70 percent funded by small online donations from individuals, and we've run multiple campaigns against Elon Musk in the last year," Ruby-Sachs said. "We are not in any way supported by or funded by Elon Musk and have a history of campaigning against him and his interests," Ruby-Sachs told NBC News. 
Another nonprofit watchdog targeted by OpenAI was The Midas Project, which strives to make sure AI benefits everyone. Notably, Musk's lawsuit accused OpenAI of abandoning its mission to benefit humanity in pursuit of immense profits. But the founder of The Midas Project, Tyler Johnston, was shocked to see his group portrayed as coordinating with Musk. He posted on X to clarify that Musk had nothing to do with the group's "OpenAI Files," which comprehensively document areas of concern with any plan to shift away from nonprofit governance. His post came after OpenAI's chief strategy officer, Jason Kwon, wrote that "several organizations, some of them suddenly newly formed like the Midas Project, joined in and ran campaigns" backing Musk's "opposition to OpenAI's restructure." "What are you talking about?" Johnston wrote. "We were formed 19 months ago. We've never spoken with or taken funding from Musk and [his] ilk, which we would have been happy to tell you if you asked a single time. In fact, we've said he runs xAI so horridly it makes OpenAI 'saintly in comparison.'"

OpenAI acting like a "cutthroat" corporation?

Johnston complained that OpenAI's subpoena had already hurt the Midas Project, as insurers had denied coverage based on news coverage. He accused OpenAI of not just trying to silence critics but of possibly shutting them down. "If you wanted to constrain an org's speech, intimidation would be one strategy, but making them uninsurable is another, and maybe that's what's happened to us with this subpoena," Johnston suggested. Other nonprofits, like the San Francisco Foundation (SFF) and Encode, accused OpenAI of using subpoenas to potentially block or slow down legal interventions. Judith Bell, SFF's chief impact officer, told NBC News that her nonprofit's subpoena came after it spearheaded a petition to California's attorney general to block OpenAI's restructuring.
And Encode's general counsel, Nathan Calvin, was subpoenaed after sponsoring a California safety regulation meant to make it easier to monitor risks of frontier AI. Unlike many of the targeted groups, Encode filed an amicus brief supporting Musk in the OpenAI lawsuit, with Calvin arguing that "OpenAI was founded as a nonprofit in order to protect" their commitment to building AI that benefits the public, "and the public interest requires they keep their word." On X, Kwon said Encode had inserted itself in the lawsuit by filing the brief, claiming, "We issued a subpoena to ensure transparency around their involvement and funding. This is a routine step in litigation." But Calvin alleged that OpenAI's subpoena has little to do with the brief and more to do with Encode's advocacy, NBC News reported. "I believe OpenAI used the pretext of their lawsuit against Elon Musk to intimidate their critics and imply that Elon is behind all of them," Calvin said. While nonprofits raged over OpenAI's alleged intimidation tactics, litigator Ray Seilie told NBC News that OpenAI's request to subpoena Calvin could have been even more demanding. "If OpenAI had wanted to intimidate or harass him, they could have served him with a deposition subpoena, which would have required Calvin to sit down for a full day of questioning under oath by OpenAI's lawyers in addition to providing documents," Seilie said. "The fact that OpenAI only asked for documents suggests that they were sincerely looking for connections between Musk and Encode, even if they turned out to be wrong about their suspicion." But Robert Weissman, co-president of a consumer advocacy group not yet targeted by OpenAI, Public Citizen, told NBC News that the subpoenas appear to seek private information from some of OpenAI's loudest critics to "chill speech and deter them from speaking out." 
The overly broad requests seem overtly shady, Weissman said, like "the kind of tactic you would expect from the most cutthroat for-profit corporation." "This behavior is highly unusual," Weissman said. "It's 100 percent intended to intimidate."

OpenAI faces criticism on subpoenas from within

Some current and former OpenAI employees were bothered enough to speak out about the subpoenas, including Joshua Achiam, OpenAI's head of mission alignment. "This doesn't seem great," he wrote, sharing "thoughts" with the public and noting "all views are my own." "Elon is certainly out to get us, and the man has got an extensive reach," Achiam said. "But there is so much that is public that we can fight him on." While claiming he would not be at OpenAI if the company didn't have "an extremely sincere commitment to good," he acknowledged that "we can't be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high." To Achiam, the main takeaway of the controversy -- which he ranks below OpenAI's prior scandal silencing employees with a non-disparagement clause -- is that OpenAI's public trust depends on the company receiving pushback from employees like him on any "dangerously incorrect use of power." "Without someone speaking up once in a while it can get worse. So, this is my pushback," Achiam wrote, closing a thread where the initial post got about 570,000 views but later posts only attracted around 30,000. "The clear lesson from that was: if we want to be a trusted power in the world, we have to earn that trust, and we can burn it all up if we ever even seem to put the little guy in our crosshairs," Achiam wrote. Meanwhile, Musk seized the moment to fan the flames of social media criticism, reposting an X post from Helen Toner, a former member of OpenAI's board, who criticized OpenAI's "dishonesty and intimidation" tactics in sending the subpoenas.
"OpenAI was built on a lie," Musk wrote in a post garnering almost 30 million views.
[2]
OpenAI allegedly sent police to an AI regulation advocate's door
Will OpenAI send police to your door if you advocate for AI regulation? Nathan Calvin, a lawyer who shapes policies surrounding the technology at Encode AI, claims OpenAI did just that. "One Tuesday night, as my wife and I sat down for dinner, a sheriff's deputy knocked on the door to serve me a subpoena from OpenAI," Calvin writes on X. In addition to subpoenaing the organization he works for, Calvin claims that OpenAI subpoenaed him personally, with the sheriff's deputy asking for his private messages with California legislators, college students, and former OpenAI employees. "I believe OpenAI used the pretext of their lawsuit against Elon Musk to intimidate their critics and imply that Elon is behind all of them," Calvin says. Last month, The San Francisco Standard reported that OpenAI had subpoenaed Encode AI to find out whether the group is funded by Elon Musk. OpenAI issued the subpoena as part of its countersuit against Elon Musk, which claims the billionaire has engaged in "bad-faith tactics to slow down OpenAI." OpenAI also subpoenaed Meta about its involvement with Musk's $97.4 billion takeover bid. Encode advocates for safety in AI and recently put together an open letter that presses OpenAI on how it plans to preserve its nonprofit mission amidst its corporate restructuring plans. The organization also pushed for SB 53, the landmark AI bill in California signed into law in September, which compels large AI companies to reveal information about their safety and security processes. "This is not normal. OpenAI used an unrelated lawsuit to intimidate advocates of a bill trying to regulate them. While the bill was still being debated," Calvin said, adding that he didn't turn over any of the documents requested. OpenAI's head of mission alignment, Joshua Achiam, responded to Calvin's post on X. "At what is possibly a risk to my whole career I will say: this doesn't seem great," Achiam wrote. 
"We can't be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high." Tyler Johnston, the founder of the AI watchdog group The Midas Project, similarly reported that he and his organization received subpoenas from OpenAI. Johnston said OpenAI asked for "a list of every journalist, congressional office, partner organization, former employee, and member of the public" that the organization has spoken to about OpenAI's restructuring. The Verge reached out to OpenAI with a request for comment but didn't immediately hear back.
[3]
A 3-person policy non-profit that worked on California's AI safety law is publicly accusing OpenAI of intimidation tactics | Fortune
Nathan Calvin, the 29-year-old general counsel of Encode -- a small AI policy nonprofit with just three full-time employees -- published a viral thread on X Friday accusing OpenAI of using intimidation tactics to undermine California's SB 53, the California Transparency in Frontier Artificial Intelligence Act, while it was still being debated. He also alleged that OpenAI used its ongoing legal battle with Elon Musk as a pretext to target and intimidate critics, including Encode, which it implied was secretly funded by Musk. Calvin's thread quickly drew widespread attention, including from inside OpenAI itself. Joshua Achiam, the company's head of mission alignment, weighed in on X with his own thread, written in a personal capacity, starting by saying "at what is possibly a risk to my whole career I will say: this doesn't seem great." Former OpenAI employees and prominent AI safety researchers also joined the conversation, many expressing concern over the company's alleged tactics. Helen Toner, the former OpenAI board member who resigned after a failed 2023 effort to oust CEO Sam Altman, wrote that some things the company does are great, but "the dishonesty & intimidation tactics in their policy work are really not." And at least one other nonprofit founder also weighed in: Tyler Johnston, founder of the AI watchdog group The Midas Project, responded to Calvin's thread with his own, saying "[I] got a knock at my door in Oklahoma with a demand for every text/email/document that, in the 'broadest sense permitted,' relates to OpenAI's governance and investors." As with Calvin, he added, he received the personal subpoena and The Midas Project was also served. "Had they just asked if I'm funded by Musk, I would have been happy to give them a simple 'man I wish' and call it a day," he wrote. 
"Instead, they asked for what was, practically speaking, a list of every journalist, congressional office, partner organization, former employee, and member of the public we'd spoken to about their restructuring." OpenAI did not respond to multiple requests for comment, but in a September article in the San Francisco Standard a lawyer for OpenAI said its actions were intended to shed light on whether its competitors were secretly bankrolling any of the organizations. "This is about transparency in terms of who funded these organizations," the lawyer said. As reported by the Standard , Calvin was served with a subpoena from OpenAI in August, delivered by a sheriff's deputy as he and his wife were sitting down to dinner. Encode, the organization he works for, was also served. The article reported that OpenAI appeared concerned that some of its most vocal critics were being funded by Elon Musk and other billionaire competitors -- and was targeting those nonprofit groups despite offering little evidence to support the claim. Calvin wrote Friday that Encode -- which he emphasized is not funded by Musk -- had criticized OpenAI's restructuring and worked on AI regulations, including SB 53. In the subpoena, OpenAI asked for all of Calvin's private communications on SB 53. "I believe OpenAI used the pretext of their lawsuit against Elon Musk to intimidate their critics and imply that Elon is behind all of them," he said, referring to the ongoing legal battle between OpenAI and Musk over the company's original nonprofit mission and governance. Encode had filed an amicus brief in the case supporting some of Musk's arguments. In a conversation with Fortune, Calvin emphasized that what has not been sufficiently covered is how inappropriate OpenAI's actions were in connection with SB 53, which was signed into law by Governor Gavin Newsom at the end of September. 
The law requires certain developers of "frontier" AI models to publish a public frontier AI framework and a transparency report when deploying or substantially modifying a model, report critical safety incidents to the state, and share assessments of catastrophic risks under the state's oversight. Calvin alleges that OpenAI sought to weaken those requirements. In a letter to Governor Newsom's office while the bill was still under negotiation, which was shared on X in early September by a former AI policy researcher, the company urged California to treat companies as compliant with the state's rules if they had already signed a safety agreement with a U.S. federal agency or joined international frameworks such as the EU's AI Code of Practice. Calvin argues that such a provision could have significantly narrowed the law's reach -- potentially exempting OpenAI and other major AI developers from key safety and transparency requirements. "I didn't want to go into a ton of detail about it while SB 53 negotiations were still ongoing and we were trying to get it through," he said. "I didn't want it to become a story about Encode and OpenAI fighting, rather than about the merits of the bill, which I think are really important. So I wanted to wait until the bill was signed." He added that another reason he decided to speak out now was a recent LinkedIn post from Chris Lehane, OpenAI's head of global affairs, describing the company as having "worked to improve" SB 53 -- a characterization Calvin said felt deeply at odds with his experience over the past few months. Encode was founded by Sneha Revanur, who launched the organization in 2020 when she was 15 years old. "She is not a full time employee yet because she's still in college," said Sunny Gandhi, Encode's vice president of political affairs. "It's terrifying to have a half a trillion dollar company come after you," Gandhi said. 
Encode formally responded to OpenAI's subpoena, Calvin said, stating that it would not be turning over any documents because the organization is not funded by Elon Musk. "They have not said anything since," he added. Writing on X, OpenAI's Achiam publicly urged his company to engage more constructively with its critics. "Elon is certainly out to get us, and the man has got an extensive reach," he wrote. "But there is so much that is public that we can fight him on. And for something like SB 53, there are so many ways to engage productively." He added, "We can't be doing things that make us into a frightening power instead of a virtuous one. We have a duty and a mission to all of humanity, and the bar to pursue that duty is remarkably high." Calvin described the episode as the "most stressful period of my professional life." He added that he uses and gets value from OpenAI products and that the company conducts and publishes AI safety research that is "worthy of genuine praise." Many OpenAI employees, he said, care a lot about OpenAI being a force for good in the world. "I want to see that side of OAI, but instead I see them trying to intimidate critics into silence," he wrote. "Does anyone believe these actions are consistent with OpenAI's nonprofit mission to ensure that AGI benefits humanity?"
[4]
OpenAI accused of using legal tactics to silence nonprofits
OpenAI CEO Sam Altman speaks during the Microsoft Build conference in 2024.

OpenAI says it was founded with the goal of benefiting humanity. But several nonprofit organizations that say the artificial intelligence behemoth has strayed from its mission allege that it has recently used intimidation tactics to silence them. At least seven nonprofits that have been critical of OpenAI have received subpoenas in recent months, which they say are overly broad and appear to be a form of legal intimidation. All of the subpoenas are part of a legal battle between OpenAI and tech titan Elon Musk, with OpenAI suggesting that the subpoenaed nonprofits are somehow connected to Musk. The organizations that received subpoenas had signed or organized open letters and petitions critical of OpenAI's ongoing efforts to restructure from a nonprofit to a for-profit public benefit corporation. In one case, a subpoenaed nonprofit had also sponsored a California bill that imposed the first wide-ranging regulations on leading AI companies like OpenAI. Six of the nonprofits were not involved in the lawsuit between OpenAI and Musk before the tech company brought them into it by issuing the subpoenas, and the remaining nonprofit had filed a supporting brief in the case but says it had not engaged with Musk. Three of the subpoenas, issued to the San Francisco Foundation, Ekō and the Future of Life Institute, have not been previously reported. The nonprofits say the subpoenas seem designed to extract private information about OpenAI's critics despite most of the organizations having no relation to Musk or the ongoing lawsuit. Robert Weissman, co-president of Public Citizen, a nonprofit consumer advocacy organization that has been critical of OpenAI's restructuring plans but is uninvolved in the current lawsuit and has not received a subpoena, told NBC News that OpenAI's intent in issuing the subpoenas is clear. "This behavior is highly unusual. 
It's 100% intended to intimidate," he said. "This is the kind of tactic you would expect from the most cutthroat for-profit corporation," Weissman said. "It's an attempt to bully nonprofit critics, to chill speech and deter them from speaking out." The subpoenas, four of which were reviewed by NBC News, ask for a wide variety of documents and materials, including all information about the organizations' funders and donations, in addition to all communications regarding Musk, Meta and its founder Mark Zuckerberg. OpenAI had previously expressed suspicions about Meta and Zuckerberg's involvement in Musk's $97 billion bid to buy OpenAI in February. The subpoenas also ask for "all documents and communications concerning the governance or organizational structure of OpenAI." OpenAI said in September that its conversion to a more traditional for-profit company would "enable us to raise the capital required to accomplish our mission." The change is seen as a key step before the company could be publicly listed and would allow investors to hold valuable equity in the for-profit instead of just receiving a slice of profits. Critics of the conversion argue such a change could allow OpenAI, now the world's highest valued startup, to pursue profit over its charitable mission. The controversy around the subpoenas exploded online Friday, as employees of several of the nonprofits alleged on social media that the AI titan's subpoenas are hardball legal tactics that far surpass normal legal action and are often irrelevant to the ongoing Musk lawsuit. The allegations, which included photos of several of the subpoenas, led to several prominent current and former employees of OpenAI publicly criticizing its actions -- highly unusual for the tight-lipped company. "This doesn't seem great," OpenAI's Joshua Achiam, the company's head of mission alignment, said in an X post Friday afternoon. Achiam did not respond to NBC News' request for comment. 
Achiam reports to CEO Sam Altman and is tasked with ensuring that OpenAI's pursuit of smarter-than-human AI systems benefits all of humanity. "We have a duty to and a mission for all of humanity," Achiam wrote Friday. "There are things that can go wrong with power and sometimes people on the inside have to be willing to point it out loudly." The subpoenas and tactics at issue in Friday's statements stem from a protracted and heated legal battle involving Musk. Musk sued Altman, OpenAI and several co-founders last year accusing them of breaching OpenAI's contractual duties by embracing a for-profit drive nested within its nonprofit structure, in what Musk alleged has been "a textbook tale of altruism versus greed." Altman and OpenAI contend that Musk, who was one of OpenAI's early boosters and provided around $45 million to the company in its early years, is now jealous of the company's success and is employing "bad-faith tactics to slow down OpenAI and seize control of the leading AI innovations for his personal benefit." Musk is also the founder and CEO of OpenAI-rival xAI, which is quickly trying to catch up to other AI companies' capabilities. In response to a request for comment, an OpenAI spokesperson referred NBC News to posts on X from OpenAI's Chief Strategy Officer Jason Kwon. On Friday, Kwon wrote that after Musk sued OpenAI, several organizations "joined in and ran campaigns backing his opposition to OpenAI's restructure. This raised transparency questions about who was funding them and whether there was any coordination." The various nonprofits that received subpoenas span a range of causes, but have all been critical of OpenAI. The San Francisco Foundation (SFF), whose mission is to strengthen communities, build civic leadership and foster philanthropy in the San Francisco area, says it has never received any funding from Musk nor has it participated in the lawsuit with Musk. 
SFF helped lead a petition in January asking California's attorney general to prevent OpenAI's attempt to restructure from a nonprofit to a for-profit entity. Judith Bell, SFF's chief impact officer, said that SFF has criticized OpenAI because of her organization's focus on philanthropy. "OpenAI, an organization whose assets have been estimated in the hundreds of billions of dollars, is effectively one of the largest nonprofit entities in history. Its assets were accumulated for the explicit, charitable purpose of benefiting humanity," she said. "SFF opposes the diversion of these immense charitable assets for private, corporate profit. We believe the law requires that the full value of these public assets must be permanently dedicated to the public good, independent of any for-profit enterprise, regardless of OpenAI's corporate structure," she added. Sean Eskovitz, a litigator and former assistant United States attorney uninvolved in the Musk case and who has not spoken out for either party before, told NBC News that "the breadth of these subpoenas strike me as quite aggressive and quite broad." OpenAI "will have to demonstrate that the requests are relevant to an issue in the litigation," he said. "Even then, there would be concerns given the limited or non-involvement of these third parties in the litigation, the nature of the activities that are being inquired about and the public interest in the advocacy and speech issues here." "There would have to be a very close look at the scope of the subpoena in order to ensure that nonparties are not being harassed, that their speech is not being chilled, and that the proponent of the subpoena is not using the subpoena for some ulterior purpose," Eskovitz said. 
Ekō, an international nonprofit organization "committed to curbing the growing power of corporations," had initially shared its concerns about OpenAI's corporate structure with the company in early April before launching a public campaign critical of the organization. Emma Ruby-Sachs, Ekō's executive director, had heard that other advocacy organizations were receiving subpoenas over the past few months. "We knew this was a tactic that OpenAI was using to try and silence opposition," she told NBC News. OpenAI responded to the campaign in an email, noting that it had made attempts to meet with Ekō before the launch of the campaign and raising its suspicions that Musk might have been involved in Ekō's campaign. In reply, Ekō's campaign director dismissed OpenAI's concerns about its funding: "We are not in any way supported by or funded by Elon Musk and have a history of campaigning against him and his interests." Given their email correspondence and OpenAI's subsequent silence, Ruby-Sachs was taken aback by OpenAI's subpoena in early September. "We had written to them and said we're over 70% funded by small online donations from individuals, and we've run multiple campaigns against Elon Musk in the last year," she said. Ruby-Sachs mentioned that Ekō even ran a billboard ad in Times Square depicting Musk as a king and advocating for him to be fired during his stint at DOGE earlier this year. Like the other nonprofits' subpoenas, Ekō's was very wide-ranging, asking for, among other details, "the identity of all Persons or entities that have contributed any funds to You and...the amount and date of any such contributions." Such breadth surprised many commentators. Helen Toner, a former member of OpenAI's board who is skeptical of some of the company's practices, labeled the approach "dishonesty & intimidation tactics." Given Ekō's public opposition to Musk, Ruby-Sachs dismissed the idea that Ekō could be funded by Musk to take down OpenAI. 
"The logical basis is so ridiculous that we have to assume this is just a tactic to scare us and get us to back off," she said. Ruby-Sachs said the organization prides itself on taking marching orders directly from its members, unlike a traditional policy or a lobbying group. Ekō determines its priorities in organization-wide votes in which hundreds of thousands of its members participate. "This subpoena shows OpenAI is going after people around the world who are legitimately concerned citizens and trying to shut them up," Ruby-Sachs said. "OpenAI is another company, just like every other company, trying to use their money and power to pursue profits, even if it screws over the people of California and potentially all of humanity," she said. Ekō's subpoena has not been widely reported before, nor has a similarly broad subpoena targeting the Future of Life Institute (FLI). Whereas Ekō says it has no ties to Musk, FLI received at least $10 million in funding from Musk. FLI last received funding from Musk in 2021. The funding focused on AI technical and policy research, including a grant-making program aimed at "keeping AI robust and beneficial." According to an FLI spokesperson, "Elon has no input into FLI's structural activities." The spokesperson said FLI distributed Musk's money to leading AI researchers, and a separate tech mogul gave FLI its permanent endowment as is made clear in its online funding reports. FLI received its subpoena at the beginning of October, while FLI President Max Tegmark received an individual subpoena in late August. "We assume the subpoena has to do with us generally calling for more oversight and transparency on the development of advanced AI and AI companies in general, which currently have zero regulation or meaningful oversight," FLI's spokesperson said. 
In a post on X addressing OpenAI, Tyler Johnston, who received a subpoena as founder of an AI-transparency advocacy group called The Midas Project, wrote: "We've never spoken with or taken funding from Musk and ilk, which we would have been happy to tell you if you asked a single time. In fact we've said he runs xAI so horridly it makes OpenAI 'saintly in comparison.'" Johnston's subpoena was previously reported in the SF Standard. On Monday, Johnston wrote on X that the subpoena and ensuing news coverage of his involvement in the lawsuit caused insurance brokers to refuse to cover his small watchdog organization. "If you wanted to constrain an org's speech, intimidation would be one strategy, but making them uninsurable is another, and maybe that's what's happened to us with this subpoena," he wrote. Former OpenAI research scientist Steven Adler told NBC News: "I'm surprised that OpenAI's Board would consider these actions consistent with its nonprofit legal obligations, or that they'd feel personally comfortable with this conduct." Legal Advocates for Safe Science and Technology (LASST) also received a subpoena from OpenAI. Tyler Whitmer, president and CEO of LASST and a lawyer himself, was unsettled by the subpoena's aggressive tactics, especially given his opposition to Musk. "I think Musk is a malign influence in the world right now," Whitmer said. "Part of my mission is to hold Musk's xAI to account in the same way I hold OpenAI to account. It's just that OpenAI is supposed to be better than this, while I don't expect the same from Elon," he said. "It's really clear that the subpoenas aren't narrowly tailored to the issues of the litigation and are instead trying to leverage the litigation to get information that OpenAI is not otherwise entitled to. And that's the best faith version of it," Whitmer said. Beyond its restructuring efforts, OpenAI also served subpoenas to at least one organization supporting recent efforts to regulate frontier AI companies. 
OpenAI's Head of Global Affairs Chris Lehane was publicly skeptical of California's SB 53, a newly signed bill mandating transparency into leading AI companies' risk-mitigation practices, before the bill was signed into law. Lehane is now attempting to shape America's AI politics to OpenAI's liking, bringing his dogged approach to Silicon Valley. Lehane recently helped launch a new $100 million Super PAC designed to fight against strict AI legislation. Encode, a nonprofit whose general counsel, Nathan Calvin, was recently subpoenaed by OpenAI, was a sponsor of the SB 53 legislation and also filed an amicus brief in Musk's lawsuit against OpenAI. In Friday's first volley, Calvin wrote: "Why did OpenAI subpoena me? Encode has criticized OpenAI's restructuring and worked on AI regulations, including SB 53." "I believe OpenAI used the pretext of their lawsuit against Elon Musk to intimidate their critics and imply that Elon is behind all of them." In response, OpenAI's Kwon wrote: "When a third party inserts themselves into active litigation, they are subject to standard legal processes. We issued a subpoena to ensure transparency around their involvement and funding. This is a routine step in litigation, not a separate legal action against Nathan or Encode." Ray Seilie, a litigator at Kinsella Holley Iser Kump Steinsapir in Los Angeles, told NBC News that the subpoenas could have been much more demanding if they were really meant as an intimidation tactic. Regarding Calvin and Encode, Seilie said: "If OpenAI had wanted to intimidate or harass him, they could have served him with a deposition subpoena, which would have required Calvin to sit down for a full day of questioning under oath by OpenAI's lawyers in addition to providing documents." "The fact that OpenAI only asked for documents suggests that they were sincerely looking for connections between Musk and Encode, even if they turned out to be wrong about their suspicion." 
Former OpenAI employee, whistleblower, and prominent AI policy researcher Daniel Kokotajlo said the pressure on critics to stay silent can be crushing even in the absence of depositions or subpoenas. "I was super scared last year when I spoke up about OpenAI's secret nondisparagement clause, even though objectively speaking I was in the right." The clause forbade former employees from saying anything negative about the company on pain of losing their vested equity; OpenAI rescinded the policy shortly after Kokotajlo came forward. "When it's actually happening to you in real life, the psychological pressure to just stay quiet is pretty darn strong and most people cave to it," Kokotajlo told NBC News. "That's why intimidation tactics work."
[5]
OpenAI vs. Elon Musk: Lawsuit escalates with claims of intimidation against AI safety advocates
Musk lawsuit linked to subpoenas against small nonprofit Encode

The legal battle between OpenAI and its co-founder Elon Musk has intensified dramatically following public accusations that the artificial intelligence powerhouse is employing aggressive intimidation tactics against advocates for government oversight, allegedly using the Musk lawsuit as a tool to silence critics.

At the core of the controversy is a tiny, three-person policy nonprofit named Encode, which played a pivotal role in pushing for the passage of California's Transparency in Frontier Artificial Intelligence Act (SB 53). This landmark law, recently signed by Governor Gavin Newsom, represents a significant step in U.S. state-level AI regulation, mandating that developers of powerful frontier models publish safety frameworks, report major incidents, and share risk assessments with state authorities. OpenAI, despite publicly touting its commitment to safety, had reportedly lobbied to weaken the bill during negotiations.

The claims of intimidation were brought to light by Nathan Calvin, Encode's 29-year-old general counsel, who published a viral thread on X (formerly Twitter) detailing how OpenAI allegedly tried to exert legal pressure on the small organization and its allies to undermine their advocacy on SB 53. Calvin specifically accuses OpenAI of leveraging its ongoing, high-profile litigation with Elon Musk to issue exceptionally broad and burdensome subpoenas to critics. He described the shock of a sheriff's deputy arriving at his home in August to serve the legal demand. The subpoena required Calvin and Encode to turn over a vast quantity of private communications and records relating to OpenAI's internal governance, its investors, and Encode's policy work, including private messages concerning their efforts on SB 53.
According to Calvin, the intent of the legal action was not discovery but pure intimidation: OpenAI, with its multi-billion-dollar valuation and virtually limitless legal resources, was allegedly trying to overwhelm a small nonprofit that operates on a minuscule fraction of its budget. By linking the subpoenas to the Musk lawsuit, OpenAI implicitly suggested that Encode and other vocal critics were secretly funded by the rival tech billionaire, a claim Encode has vehemently denied. This narrative, critics argue, serves to delegitimize policy-focused advocacy groups as proxies for a corporate rival rather than independent voices concerned with public safety.

The decision to deploy such aggressive legal tactics against a public-interest nonprofit immediately drew criticism from across the AI industry, including a rare and notable wave of dissent from current and former OpenAI personnel. Joshua Achiam, OpenAI's Head of Mission Alignment, posted a remarkably candid public statement expressing discomfort with the company's actions. He said the use of such tactics against constructive critics was "not great" and warned that the company cannot be seen as a "frightening power instead of a virtuous one." He urged his colleagues to "engage more constructively" with critics, reinforcing the idea that the company has a "duty and a mission to all of humanity."

Adding to the outcry was Helen Toner, a former OpenAI board member who was involved in the 2023 drama surrounding CEO Sam Altman's brief ousting. Toner was blunt, writing that while OpenAI does good research, "the dishonesty & intimidation tactics in their policy work are really not."
This public pressure from insiders, who are often bound by strict non-disclosure and off-boarding agreements, underscores the deep philosophical rift within OpenAI between its profit-driven imperatives and its founding nonprofit mission.

The intimidation claims provide a stark real-world example of the concerns at the heart of the legal case filed by Elon Musk. His lawsuit alleges that OpenAI, which he co-founded in 2015, has fundamentally betrayed its charter as a nonprofit dedicated to the open-source, non-commercial development of artificial general intelligence (AGI) for the benefit of humanity, and his public statement that OpenAI is "built on a lie" is amplified by these new revelations. Critics argue that a company prioritizing its original mission of safe AGI deployment would welcome, or at least constructively engage with, a law like SB 53 that requires basic transparency and safety reporting. Instead, OpenAI's alleged strategy was two-fold: dilute the bill's requirements through lobbying, then use legal force to silence the very people advocating for those requirements. This behavior, opponents argue, is consistent with a company that has prioritized rapid commercial deployment and profit maximization over its core safety mission.

Despite the legal pressure, Encode and its allies stood firm, refusing to hand over the demanded documents and continuing their advocacy; SB 53, requiring developers to report on safety measures and risks, was ultimately signed into law in September. However, the use of targeted subpoenas against small civil-society groups sets a dangerous precedent, threatening to chill public-interest advocacy and allowing powerful tech companies to overwhelm critics with expensive, broad-ranging legal demands during critical legislative debates. The escalation transforms the Musk v. OpenAI dispute from a corporate governance fight into a battle with significant implications for the future of democratic regulation and AI safety oversight.
OpenAI faces backlash for issuing subpoenas to nonprofit organizations critical of its restructuring, allegedly using its lawsuit with Elon Musk as a pretext. Critics claim these actions are attempts to silence opposition and undermine AI safety advocacy efforts.

OpenAI, the artificial intelligence powerhouse, has found itself embroiled in controversy following accusations of using intimidation tactics against AI safety advocates and nonprofit organizations. The company has reportedly issued subpoenas to at least seven nonprofits that have been critical of its ongoing effort to restructure from a nonprofit to a for-profit public benefit corporation [1][4].

These subpoenas are part of OpenAI's legal battle with co-founder Elon Musk, who sued the company alleging a breach of its original nonprofit mission. OpenAI claims the subpoenas are intended to investigate whether these organizations are secretly funded by Musk or other competitors [3][4].

The controversy has particularly affected Encode, a small three-person nonprofit that played a crucial role in advocating for California's Transparency in Frontier Artificial Intelligence Act (SB 53). Nathan Calvin, Encode's 29-year-old general counsel, revealed that he was personally served with a subpoena by a sheriff's deputy at his home [3][5].

Calvin and other nonprofit leaders argue that OpenAI's actions are not about discovery but pure intimidation. They claim the company is using its vast resources to overwhelm smaller organizations that operate on far smaller budgets [3][5].

The subpoenas, as described by those who received them, are exceptionally broad in scope, demanding a wide variety of documents and materials [4].

The controversy has sparked rare public criticism from within OpenAI itself. Joshua Achiam, the company's Head of Mission Alignment, expressed discomfort with the company's actions, stating that OpenAI cannot be seen as a "frightening power instead of a virtuous one" [3][5]. Helen Toner, a former OpenAI board member, also weighed in, criticizing the company's "dishonesty & intimidation tactics" in its policy work [3][5].

This situation highlights the growing tension between OpenAI's profit-driven imperatives and its founding nonprofit mission. Critics argue that a company truly committed to safe AI development would welcome transparency laws like SB 53, rather than attempting to weaken them through lobbying and to silence their advocates [5].

The controversy also underscores the challenges of regulating rapidly advancing AI technologies. As companies like OpenAI grow more powerful, concerns are mounting about their influence on policy-making and their treatment of critics and safety advocates [2][4].

The allegations against OpenAI have sent ripples through the AI industry. Other nonprofits, such as The Midas Project and Ekō, have also reported receiving similar subpoenas [1][2], raising concerns about a potential chilling effect on AI safety advocacy and on the ability of smaller organizations to participate in crucial policy discussions [4][5].

As the legal battle between OpenAI and Elon Musk continues to unfold, the AI community watches closely. The outcome of this controversy could have significant implications for the future of AI governance, the role of nonprofits in tech policy, and the balance between innovation and safety in the rapidly evolving field of artificial intelligence.
Summarized by Navi