Curated by THEOUTPOST
On Wed, 23 Apr, 4:04 PM UTC
14 Sources
[1]
AI leaders urge CA, DE AGs to stop OpenAI for-profit shift
Elon's not the only one sounding the alarm over the AI giant's cash grab

A group of AI heavyweights and ex-OpenAI staffers are urging the attorneys general of California and Delaware to block the ChatGPT shop's latest restructuring into a for-profit corporation. Nobel laureate and neural-network maven Geoffrey Hinton; UC Berkeley's Stuart Russell; Hugging Face's Margaret Mitchell; and 10 former OpenAI employees are among the signatories of an open letter asking the AGs to intervene before OpenAI's nonprofit shell gives up the wheel entirely. Essentially, they argue that the development of artificial general intelligence (AGI) is too important to leave to a private company with a profit motive, and must instead be preserved for the public interest.

At issue is the plan to convert OpenAI's for-profit arm into a Delaware public benefit corporation (PBC) and give it full control over the company's operations and business, leaving the nonprofit to manage charitable initiatives in sectors such as healthcare, education, and science. The signatories argue this would dismantle the governance safeguards that once set OpenAI apart: capped investor returns, an independent board, and a fiduciary duty to prioritize the public interest over financial gain.

"OpenAI has a bespoke legal structure based on nonprofit control," the letter states. "This structure is designed to harness market forces without allowing those forces to overwhelm its charitable purpose."

While the group echoes some of the concerns raised by departed co-founder Elon Musk, who is currently suing OpenAI to block the restructure, they're playing a different card: a formal appeal to regulators to protect the public.

"As the primary regulators of OpenAI, you currently have the power to protect OpenAI's charitable purpose on behalf of its beneficiaries, safeguarding the public interest at a potentially pivotal moment in the development of this technology," the letter tells California AG Rob Bonta and Delaware AG Kathleen Jennings. "Under OpenAI's proposed restructuring, that would no longer be the case."

"Do not allow the restructuring to proceed as planned," the authors demand, point blank. "We urge you to protect OpenAI's charitable purpose by preserving the core governance safeguards that OpenAI and Mr. Altman have repeatedly claimed are important to its mission."

The arguments in the letter will sound familiar to anyone who has followed the saga of the company, which Sam Altman and a coterie of other luminaries, including Musk, founded in 2015. At launch, OpenAI pitched itself as a nonprofit counterweight to the likes of Google DeepMind, committed to developing AGI for the benefit of all humanity. That changed in 2019, when it created a for-profit subsidiary, OpenAI LP, under the full control of its nonprofit parent. The new structure was pitched as a "capped-profit" model that allowed investors a limited return, with anything above that cap reverting to the nonprofit. The board was supposed to remain majority-independent, and any AGI developed would be owned and governed by the nonprofit. The second restructuring, proposed last year, would take it a step further: converting OpenAI LP into a Delaware PBC.
This aims to help OpenAI raise funds and compete with its for-profit rivals, something that hasn't really been a problem for the firm of late.

The signatories urge the AGs to demand answers from OpenAI, arguing that the outfit hasn't publicly addressed key questions about its restructuring, such as whether alternative models were considered, what role investor pressure played, and whether any board members involved in the process stand to benefit personally. They also ask the AGs to remove any directors who compromised the integrity of the nonprofit board's decisions, and to put new oversight mechanisms in place to ensure the board remains independent and focused on OpenAI's founding mission, not on maximizing for-profit returns.

We haven't heard from the Delaware AG's office, and the California AG's office told us it can't comment on the matter due to an ongoing investigation. Bonta's office did give us a clue as to what it thinks of the letter, though. "The California Department of Justice is committed to protecting charitable assets for their intended purpose and takes this responsibility seriously," the California AG's office told us.

OpenAI responded to The Register with a similar explanation to the one it has given us before, claiming the nonprofit isn't being sidelined and, if anything, is actually being made stronger by the move. "Our board has been very clear: our nonprofit will be strengthened and any changes to our existing structure would be in service of ensuring the broader public can benefit from AI," an OpenAI spokesperson told us in an emailed statement. Turning the for-profit arm into a PBC would make it similar to other AI outfits like Anthropic and xAI, the company added, while also getting in a dig at Musk by noting that rivals such as xAI "do not support a nonprofit." OpenAI also pointed out, again, that it had recently established a commission of experts to guide its "philanthropic efforts" by "maximizing impact for people and mission-driven organizations."

None of what OpenAI has said directly addresses its own plan for the for-profit arm to take over day-to-day operations under the latest proposed restructuring. As a Delaware public benefit corporation, OpenAI would still be legally required to pursue a public benefit alongside profit. However, under Delaware law, the OpenAI PBC wouldn't be required to publicly disclose progress toward that goal, and PBCs have broad leeway to engage in profitable ventures that don't necessarily align with their declared public missions. Given that SoftBank recently promised to plow $40 billion into the AI behemoth, with a specific condition that OpenAI convert into a for-profit entity this year, we can imagine a strong incentive to engage in as many of those profit-making ventures as possible. ®
[2]
Ex-OpenAI staff and top AI experts seek to block proposed for-profit restructure
Former OpenAI employees and leading artificial intelligence experts are joining forces to oppose the ChatGPT maker's transition to a for-profit company over concerns about the dangers of the advancing technology.

Leading academics Geoffrey Hinton, Margaret Mitchell and Stuart Russell, as well as 10 former OpenAI staffers, are the latest figures to urge US authorities to block the start-up's proposed switch from a non-profit structure to a public benefit corporation (PBC).

In a joint letter submitted to the California and Delaware attorneys-general on Tuesday night, the group opposed the move while echoing the concerns of Elon Musk, who is fighting a legal battle in an effort to stop the conversion.

The signatories argue that the proposed restructuring would transfer control of the development of artificial general intelligence (AGI) -- computer systems with equal or superior cognitive abilities to most humans -- to a company driven by profits. They added that this contradicts OpenAI's founding mission of ensuring that AGI benefits all of humanity rather than "the private gain of any person".

"I would like [OpenAI] to execute that mission instead of enriching their investors," said Hinton, a Nobel laureate and professor at the University of Toronto.

The intervention brings further attention to OpenAI's controversial plan to transform into a for-profit entity by the end of this year. Without the switch, the San Francisco-based company, led by chief executive Sam Altman, risks forfeiting some of its recent $30bn investment from SoftBank, as well as additional contributions from investors in previous rounds.

OpenAI, which is now valued at $300bn, has argued that at a time when rivals such as Google and Meta are investing hundreds of billions of dollars to develop the technology, its investors "need conventional equity and less structural bespokeness" to commit further capital.

But Page Hedley, one of the former OpenAI employees to have signed the letter opposing its conversion, said: "Competing is not OpenAI's mission. By all accounts, it has already been extraordinarily successful at raising money." He added: "Under the PBC, the board would not have a fiduciary duty to the beneficiaries of the mission, the public . . . and if the board does not fulfil its fiduciary duty to shareholders, they would have recourse."

OpenAI has a complex financial structure. In 2019, it created a for-profit subsidiary, which capped returns for investors and gave its non-profit board full control over the for-profit arm. Under the proposed changes, the non-profit board would have a stake in the for-profit entity and certain voting rights, but the for-profit would have a much more traditional investor structure.

"Our non-profit will be strengthened, and any changes to our existing structure would be in service of ensuring the broader public can benefit from AI," OpenAI said in a statement. "This structure will continue to ensure that as the for-profit succeeds and grows, so too does the non-profit, enabling us to achieve the mission," it added.

The attorneys-general of California and Delaware, who have authority over the decision because OpenAI is based and incorporated in their respective states, have a responsibility to ensure the conversion is in the public interest and conducted at a fair value. The letter addressed to them argues that OpenAI's structure was designed to balance market forces with its mission, and that the proposed changes would abandon those safeguards under financial pressure.
OpenAI has previously warned that AGI would create "serious risk of misuse, drastic accidents and societal disruption". In May 2023, Altman signed a statement cautioning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".

"If OpenAI cedes control over its for-profit subsidiary, that would not only be a clear violation of its stated charitable purpose . . . but it would also greatly increase the odds of what its own CEO has described as 'lights out for all of us'," said Russell, professor of computer science at the University of California, Berkeley.
[3]
Ex-OpenAI workers ask California and Delaware AGs to block for-profit conversion of ChatGPT maker
Former employees of OpenAI are asking the top law enforcement officers in California and Delaware to stop the company from shifting control of its artificial intelligence technology from a nonprofit charity to a for-profit business.

They're concerned about what happens if the ChatGPT maker fulfills its ambition to build AI that outperforms humans, but is no longer accountable to its public mission to safeguard that technology from causing grievous harms.

"Ultimately, I'm worried about who owns and controls this technology once it's created," said Page Hedley, a former policy and ethics adviser at OpenAI, in an interview with The Associated Press.

Backed by three Nobel Prize winners and other advocates and experts, Hedley and nine other ex-OpenAI workers sent a letter this week to the two state attorneys general. The coalition is asking California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings, both Democrats, to use their authority to protect OpenAI's charitable purpose and block its planned restructuring. OpenAI is incorporated in Delaware and operates out of San Francisco.

OpenAI said in response that "any changes to our existing structure would be in service of ensuring the broader public can benefit from AI." It said its for-profit will be a public benefit corporation, similar to other AI labs like Anthropic and tech billionaire Elon Musk's xAI, except that OpenAI will still preserve a nonprofit arm.

"This structure will continue to ensure that as the for-profit succeeds and grows, so too does the nonprofit, enabling us to achieve the mission," the company said in a statement.

The letter is the second petition to state officials this month. The last came from a group of labor leaders and nonprofits focused on protecting OpenAI's billions of dollars of charitable assets.

Jennings said last fall she would "review any such transaction to ensure that the public's interests are adequately protected." Bonta's office sought more information from OpenAI late last year but has said it can't comment, even to confirm or deny if it is investigating.

OpenAI's co-founders, including current CEO Sam Altman and Musk, originally started it as a nonprofit research laboratory on a mission to safely build what's known as artificial general intelligence, or AGI, for humanity's benefit. Nearly a decade later, OpenAI has reported its market value as $300 billion and counts 400 million weekly users of ChatGPT, its flagship product.

OpenAI already has a for-profit subsidiary but faces a number of challenges in converting its core governance structure. One is a lawsuit from Musk, who accuses the company and Altman of betraying the founding principles that led the Tesla CEO to invest in the charity. While some of the signatories of this week's letter support Musk's lawsuit, Hedley said others are "understandably cynical" because Musk also runs his own rival AI company.

The signatories include two Nobel-winning economists, Oliver Hart and Joseph Stiglitz, as well as AI pioneers and computer scientists Geoffrey Hinton, who won last year's Nobel Prize in physics, and Stuart Russell.

"I like OpenAI's mission to 'ensure that artificial general intelligence benefits all of humanity,' and I would like them to execute that mission instead of enriching their investors," Hinton said in a statement Wednesday. "I'm happy there is an effort to hold OpenAI to its mission that does not involve Elon Musk."
Conflicts over OpenAI's purpose have long simmered at the San Francisco institute, contributing to Musk quitting in 2018, Altman's short-lived ouster in 2023 and other high-profile departures.

Hedley, a lawyer by training, worked for OpenAI in 2017 and 2018, a time when the nonprofit was still navigating the best ways to steward the technology it wanted to build. As recently as 2023, Altman said advanced AI held promise but also warned of extraordinary risks, from drastic accidents to societal disruptions.

In recent years, however, Hedley said he watched with concern as OpenAI, buoyed by the success of ChatGPT, was increasingly cutting corners on safety testing and rushing out new products to get ahead of business competitors.

"The costs of those decisions will continue to go up as the technology becomes more powerful," he said. "I think that in the new structure that OpenAI wants, the incentives to rush to make those decisions will go up and there will no longer be anybody really who can tell them not to, tell them this is not OK."

Software engineer Anish Tondwalkar, a former member of OpenAI's technical team until last year, said an important assurance in OpenAI's nonprofit charter is a "stop-and-assist clause" that directs OpenAI to stand down and help if another organization is nearing the achievement of better-than-human AI.

"If OpenAI is allowed to become a for-profit, these safeguards, and OpenAI's duty to the public can vanish overnight," Tondwalkar said in a statement Wednesday.

Another former worker who signed the letter puts it more bluntly. "OpenAI may one day build technology that could get us all killed," said Nisan Stiennon, an AI engineer who worked at OpenAI from 2018 to 2020. "It is to OpenAI's credit that it's controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control."

___

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.
[4]
Ex-OpenAI staffers urge states not to approve ChatGPT maker's restructuring effort
Sam Altman, chief executive officer of OpenAI, during a fireside chat organized by SoftBank Ventures Asia in Seoul, South Korea, on Friday, June 9, 2023.

A group of ex-OpenAI employees, Nobel laureates, law professors and civil society organizations sent a letter last week to attorneys general in California and Delaware requesting that they halt the startup's restructuring efforts out of safety concerns.

In the letter, which was delivered to OpenAI's board on Tuesday, the group wrote that restructuring to a for-profit entity would "subvert OpenAI's charitable purpose" and "remove nonprofit control and eliminate critical governance safeguards."

"No sale price can compensate for loss of control," the group wrote.

OpenAI, which was created as a nonprofit artificial intelligence research lab in 2015, has been commercializing products in recent years, most notably its viral ChatGPT chatbot. The company is still overseen by a nonprofit parent but announced last year that it would convert into a for-profit company, wresting control from the nonprofit but keeping it as a separate arm.

The change, which requires the approval of principal backer Microsoft and the California attorney general, would remove some potential restraints as the company takes on competitors including Microsoft, Google, Amazon and Elon Musk's xAI. The effort sparked controversy among OpenAI employees and AI leaders, as it seemed to contradict the company's mission and founding principles. Multiple company executives have since left and started their own AI companies.

"OpenAI may one day build technology that could get us all killed," Nisan Stiennon, who worked at OpenAI from 2018 to 2020, said in a statement. "It is to OpenAI's credit that it's controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control."

Jacob Hilton, who worked at OpenAI from 2018 to 2023, said that throughout his time at the company, "leadership repeatedly emphasized that OpenAI's primary fiduciary duty is to humanity."

"They claimed this was a legally binding promise, enshrined in the company's Charter and enforced through its corporate structure," Hilton said in a statement. "Now they are proposing to abandon that foundational pledge."

OpenAI didn't immediately respond to a request for comment.
[5]
A growing coalition joins Elon Musk's war on OpenAI
Legal scholars and former OpenAI employees asked state officials to block the ChatGPT developer's plan to restructure in pursuit of profits.

SAN FRANCISCO -- A plan by ChatGPT maker OpenAI to shed its nonprofit status to more easily make money from artificial intelligence faces growing opposition from a varied coalition that includes consumer advocates, labor groups, law professors -- and billionaire Elon Musk. The public and legal pressure could reshape the race to develop AI technology and determine the fate of one of the most valuable tech companies to emerge in the past decade.

A group of legal scholars, AI researchers and tech industry watchdogs publicly joined the chorus opposing OpenAI's plans to restructure late Tuesday, sending its board a copy of a letter sent privately last week to the attorneys general of California and Delaware. The letter asked the state officials to block OpenAI's move, arguing it would remove a powerful technology from the oversight of its nonprofit board, which is tasked with benefiting humanity, not chasing profits.

Earlier this month, California Attorney General Rob Bonta received a request to halt and investigate OpenAI's transition plan from a group of more than 50 nonprofits including faith, labor and community groups. California's attorney general must sign off on any plan to turn a nonprofit into a for-profit company in the state if it has significant assets.

Musk's attempt to stop OpenAI's restructuring is playing out in federal court after the billionaire, a co-founder of the company who later split with the AI developer, filed suit last year. A group of 12 former OpenAI employees filed a proposed amicus brief in support of Musk this month. The case is expected to go to trial in coming months.

The nonprofits and individuals asking state officials to intervene are not formally allied with Musk, who has publicly sparred with OpenAI CEO Sam Altman and founded a rival AI developer called xAI. However, their arguments have similarities to some made by Musk in court filings, in which he has said OpenAI should not be permitted to abandon its founding mission to benefit humanity.

Giving a for-profit company full control of OpenAI's technology could incentivize it to prioritize profits over making sure its technology is safe, said Tyler Whitmer, the head of Legal Advocates for Safe Science and Technology and an organizer of the letter sent to the company's board on Tuesday.

"There are a lot of people who are really concerned about this restructuring who have no love for Musk and certainly don't see themselves as allied with him," Whitmer said. "Taking this directly to the attorneys general pulls it out of the context of that court case and puts it in front of people who currently have enforcement power over charities like OpenAI."

Other signatories of the letter include 10 former OpenAI employees and experts in corporate governance from prominent universities, including Harvard and Columbia.

Jason Deutrom, a spokesperson for OpenAI, said the company is not abandoning its founding purpose. "Our Board has been very clear: our nonprofit will be strengthened and any changes to our existing structure would be in service of ensuring the broader public can benefit from AI," he said. "This structure will continue to ensure that as the for-profit succeeds and grows, so too does the nonprofit, enabling us to achieve the mission."

The Washington Post has a content partnership with OpenAI.
OpenAI was founded as a nonprofit organization in 2015 with a charter that says its mission is to develop AI capable of human tasks and ensure that the technology benefits humanity as a whole. It is now among the handful of companies at the forefront of AI and says that to win the investment needed to keep up with rivals such as Google and Meta, it must split off development of OpenAI's technology from the original nonprofit and the oversight of its board.

The restructuring plan would see OpenAI transfer some of its significant resources to the newly independent nonprofit, which executives say will still seek to use AI to benefit humanity. "Our plan would result in one of the best resourced non-profits in history," OpenAI said in a December blog post outlining the restructuring plan.

Opponents say control over OpenAI's tech is so valuable that splitting it from the nonprofit amounts to ripping away potentially billions of dollars of value from an organization pledged to serve the public.

Letting OpenAI execute its restructuring plan would set a dangerous precedent for other nonprofits, said Michael Dorff, a corporate law professor at UCLA who signed the letter sent to OpenAI's board on Tuesday. "If those rules break under the influence of a sufficiently large profit incentive, the resulting precedent will not be limited to OpenAI," he said. "Donors will no longer be able to trust that their gifts to charities will be used for their intended purposes, and the entire system of charitable giving may fall apart."

In response to a request for comment from The Post, an unsigned email statement from the California attorney general's office said the state's Department of Justice is committed to protecting charitable assets for their intended purpose.

OpenAI's leaders have struggled with its structure since its founding, emails released in the course of Musk's lawsuit last year show. Musk, initially the company's primary financial backer, and his co-founders squabbled over control of the project and how to make it sustainable. The billionaire left OpenAI in 2018 after Altman and other leaders rejected his suggestion that he take over the company. Soon after, OpenAI took on outside investment to fund the high costs of AI development, which requires access to powerful computer chips.

Microsoft and other investors piled more money into OpenAI after the 2022 launch of ChatGPT sparked an intense AI race across the tech industry. AI development is now more expensive than ever. Microsoft plans to spend $80 billion this year on new AI data centers, and Google has said most of its $75 billion in capital expenditures this year will also be spent on AI infrastructure.

OpenAI has said it needs more investment to keep up with spending by AI rivals and last month raised $40 billion from investors including Japanese conglomerate SoftBank. Most of that money will only be provided if the company sheds its nonprofit status by the end of the year.

Jacob Hilton, a former OpenAI worker who left the company in 2023 and signed the recent letter to state attorneys general, said meeting that condition would see OpenAI abandon its roots. "The proposed restructuring would contradict what I was told by company leadership throughout my four and a half years at the company: that OpenAI's primary fiduciary duty is to humanity," Hilton said. "They are proposing to abandon that foundational pledge."
[8]
Former OpenAI staff and AI experts ask Attorneys General to block for-profit conversion - SiliconANGLE
Former employees of OpenAI, along with 30 AI experts, have published an open letter asking the attorneys general in California and Delaware to stop the company from restructuring into a "for-profit benefit corporation."

Nobel laureate and former Google Brain leader Geoffrey Hinton has joined forces with fellow AI pioneers Yoshua Bengio and Stuart Russell, along with 11 OpenAI staffers - including four current (anonymous) and seven former employees - to urge the company to steer clear of potentially unethical business ventures. Among the signatories are Ramana Kumar, formerly of Google DeepMind, and Neel Nanda, a current DeepMind researcher and former Anthropic employee.

They join Elon Musk, who has also argued that the shift from a non-profit model to a for-profit one is not the mission he helped fund, a dispute that led him to file a lawsuit. Musk's personal mission is somewhat dubious, given that he also owns an AI company. Perhaps for that reason, today Hinton said, "I'm happy there is an effort to hold OpenAI to its mission that does not involve Elon Musk."

In the letter, the signatories argue that the development of artificial general intelligence, or AGI, should not be in the hands of a company driven by making money. AGI refers to AI that matches or exceeds human intelligence, which they say should be developed not for private gain but for the benefit of all people. OpenAI's founding mission was based on just that - "the goal of building safe and beneficial artificial general intelligence for the benefit of humanity." Speaking in London in 2017, OpenAI Chief Executive Sam Altman doubled down on this, saying, "We don't ever want to be making decisions to benefit shareholders. The only people we want to be accountable to is humanity as a whole."

The benefits of AI, say the signatories, are "unprecedented," but they warn that in the wrong hands AI could worsen "existing inequalities," fuel "manipulation and misinformation," and end up "potentially resulting in human extinction." They cite a number of experts, AI companies and governments that have separately aired concerns about these potentially existential risks. "AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," the letter added.

OpenAI has become a giant, presently worth $300 billion. The writers also believe that current and former employees are hampered in speaking about their concerns because of "broad confidentiality agreements." They say companies bear almost no pressure to share information with governments or the public, effectively leaving world-changing technology in the hands of a few people. "We do not think they can all be relied upon to share it voluntarily," they said.

In response, OpenAI said any changes to the company "would be in service of ensuring the broader public can benefit from AI." It added, "This structure will continue to ensure that as the for-profit succeeds and grows, so too does the non-profit, enabling us to achieve the mission."
[12]
Ex-OpenAI workers ask California and Delaware AGs to block for-profit conversion of ChatGPT maker
Former employees of OpenAI are asking the top law enforcement officers in California and Delaware to stop the company from shifting control of its artificial intelligence technology from a nonprofit charity to a for-profit business. They're concerned about what happens if the ChatGPT maker fulfills its ambition to build AI that outperforms humans, but is no longer accountable to its public mission to safeguard that technology from causing grievous harms. "Ultimately, I'm worried about who owns and controls this technology once it's created," said Page Hedley, a former policy and ethics adviser at OpenAI, in an interview with The Associated Press. Backed by three Nobel Prize winners and other advocates and experts, Hedley and nine other ex-OpenAI workers sent a letter this week to the two state attorneys general. The coalition is asking California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings, both Democrats, to use their authority to protect OpenAI's charitable purpose and block its planned restructuring. OpenAI is incorporated in Delaware and operates out of San Francisco. OpenAI said in response that "any changes to our existing structure would be in service of ensuring the broader public can benefit from AI." It said its for-profit will be a public benefit corporation, similar to other AI labs like Anthropic and tech billionaire Elon Musk's xAI, except that OpenAI will still preserve a nonprofit arm. "This structure will continue to ensure that as the for-profit succeeds and grows, so too does the nonprofit, enabling us to achieve the mission," the company said in a statement. The letter is the second petition to state officials this month. The last came from a group of labor leaders and nonprofits focused on protecting OpenAI's billions of dollars of charitable assets. Jennings said last fall she would "review any such transaction to ensure that the public's interests are adequately protected." Bonta's office sought more information from OpenAI late last year but has said it can't comment, even to confirm or deny if it is investigating. OpenAI's co-founders, including current CEO Sam Altman and Musk, originally started it as a nonprofit research laboratory on a mission to safely build what's known as artificial general intelligence, or AGI, for humanity's benefit. Nearly a decade later, OpenAI has reported its market value as $300 billion and counts 400 million weekly users of ChatGPT, its flagship product. OpenAI already has a for-profit subsidiary but faces a number of challenges in converting its core governance structure. One is a lawsuit from Musk, who accuses the company and Altman of betraying the founding principles that led the Tesla CEO to invest in the charity. While some of the signatories of this week's letter support Musk's lawsuit, Hedley said others are "understandably cynical" because Musk also runs his own rival AI company. The signatories include two Nobel-winning economists, Oliver Hart and Joseph Stiglitz, as well as AI pioneers and computer scientists Geoffrey Hinton, who won last year's Nobel Prize in physics, and Stuart Russell. "I like OpenAI's mission to 'ensure that artificial general intelligence benefits all of humanity,' and I would like them to execute that mission instead of enriching their investors," Hinton said in a statement Wednesday. "I'm happy there is an effort to hold OpenAI to its mission that does not involve Elon Musk." 
Conflicts over OpenAI's purpose have long simmered at the San Francisco institute, contributing to Musk quitting in 2018, Altman's short-lived ouster in 2023 and other high-profile departures.

Hedley, a lawyer by training, worked for OpenAI in 2017 and 2018, a time when the nonprofit was still navigating the best ways to steward the technology it wanted to build. As recently as 2023, Altman said advanced AI held promise but also warned of extraordinary risks, from drastic accidents to societal disruptions.

In recent years, however, Hedley said he watched with concern as OpenAI, buoyed by the success of ChatGPT, was increasingly cutting corners on safety testing and rushing out new products to get ahead of business competitors. "The costs of those decisions will continue to go up as the technology becomes more powerful," he said. "I think that in the new structure that OpenAI wants, the incentives to rush to make those decisions will go up and there will no longer be anybody really who can tell them not to, tell them this is not OK."

Software engineer Anish Tondwalkar, a member of OpenAI's technical team until last year, said an important assurance in OpenAI's nonprofit charter is a "stop-and-assist clause" that directs OpenAI to stand down and help if another organization is nearing the achievement of better-than-human AI. "If OpenAI is allowed to become a for-profit, these safeguards, and OpenAI's duty to the public can vanish overnight," Tondwalkar said in a statement Wednesday.

Another former worker who signed the letter put it more bluntly. "OpenAI may one day build technology that could get us all killed," said Nisan Stiennon, an AI engineer who worked at OpenAI from 2018 to 2020. "It is to OpenAI's credit that it's controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control."

___

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.
[13]
Ex-OpenAI workers seek stay on for-profit conversion of ChatGPT maker
Former OpenAI employees, backed by Nobel laureates, are urging California and Delaware's attorneys general to block the company's shift to a for-profit structure. They fear this restructuring will compromise OpenAI's commitment to AI safety and public benefit, potentially prioritising profit over responsible development. Concerns center on the control and accountability of advanced AI technology, particularly if it surpasses human capabilities.
[14]
Geoffrey Hinton, Ex-OpenAI Insiders, And Top AI Experts Sound Alarm On OpenAI's Restructuring: Warn It Could Strip Public Of Oversight, Betray AGI Mission - Microsoft (NASDAQ:MSFT)
A coalition of over 30 leading artificial intelligence researchers and ethicists has issued an urgent warning about OpenAI's proposed corporate restructuring, expressing grave concerns that the changes could undermine public oversight and the company's original mission.

What Happened: The group published an open letter on the "Not For Private Gain" website urging the attorneys general of California and Delaware to intervene in OpenAI's plan to buy itself out from under its nonprofit's control. The letter argues that this restructuring would eliminate crucial governance safeguards designed to prioritize public benefit over commercial interests. "Removing nonprofit control over how AGI [artificial general intelligence] is developed and governed would violate the special fiduciary duty owed to the nonprofit's beneficiaries and pose a palpable and identifiable threat to OpenAI's charitable purpose," the signatories stated.

The expert coalition includes AI pioneer Geoffrey Hinton and notable former OpenAI researchers Steven Adler, Jacob Hilton, Daniel Kokotajlo, Gretchen Krueger, Girish Sastry, Scott Aaronson, Ryan Lowe, Nisan Stiennon, and Anish Tondwalkar. Harvard law professor Lawrence Lessig, UC Berkeley computer science professor Stuart Russell, and Hugging Face ethics scientist Margaret Mitchell also joined the effort.

"Our board has been very clear: our nonprofit will be strengthened, and any changes to our existing structure would be in service of ensuring the broader public can benefit from AI," an OpenAI spokesperson told CNBC.

Why It Matters: This intervention comes at a critical juncture, as OpenAI must complete its restructuring by year-end to secure the full $40 billion funding round led by SoftBank Group Corp. The company plans to transform into a Delaware public benefit corporation, allowing it to attract conventional equity investments while its nonprofit arm focuses on charitable initiatives.

The AI experts' objections parallel ongoing litigation from OpenAI co-founder Elon Musk, whose xAI Corp now competes with OpenAI after raising $6 billion in funding. Microsoft Corp. (NASDAQ: MSFT), which has invested nearly $14 billion in OpenAI, could also be significantly affected by any regulatory action on the restructuring.

The dispute highlights growing tensions between OpenAI's original mission to ensure artificial general intelligence "benefits all of humanity" and its commercial ambitions, raising fundamental questions about who should control and govern powerful AI technologies with potentially transformative global impacts.
A group of AI experts, including Nobel laureates and former OpenAI employees, has appealed to the attorneys general of California and Delaware to prevent OpenAI from restructuring into a for-profit entity, citing concerns over the company's mission and public accountability.
A coalition of prominent AI researchers, Nobel laureates, and former OpenAI employees has launched a campaign to block OpenAI's proposed restructuring into a for-profit entity. The group, which includes Geoffrey Hinton, Stuart Russell, and Margaret Mitchell, has sent an open letter to the attorneys general of California and Delaware, urging them to intervene in the ChatGPT maker's plans [1][2].
The signatories argue that OpenAI's transition to a for-profit model would undermine its original mission of developing artificial general intelligence (AGI) for the benefit of humanity. They contend that the restructuring would dismantle crucial governance safeguards, such as capped investor returns and an independent board, which were designed to prioritize public interest over financial gain [1][3].
OpenAI currently operates under a unique structure with a nonprofit parent organization controlling a for-profit subsidiary. The proposed restructuring would convert the for-profit arm into a Delaware public benefit corporation (PBC), giving it full control over operations and business activities [1][4].
The letter emphasizes the attorneys general's responsibility to protect OpenAI's charitable purpose and safeguard the public interest. It questions whether alternative models were considered and whether investor pressure played a role in the decision [1][5].
OpenAI maintains that the restructuring would strengthen its nonprofit arm and enable it to compete more effectively with well-funded rivals like Google and Meta. The company argues that the changes are necessary to secure additional investment and advance its mission [2][4].
This appeal is part of a broader movement against OpenAI's restructuring plans. Earlier, a group of nonprofits, including faith, labor, and community organizations, asked the California Attorney General to halt and investigate the transition [5]. Additionally, Elon Musk, a co-founder of OpenAI, has filed a lawsuit to block the conversion [3].
Critics warn that allowing OpenAI to proceed with its restructuring could set a dangerous precedent for other nonprofits and undermine public trust in charitable organizations. They argue that control over OpenAI's technology is too valuable to be transferred to a profit-driven entity [5].
The attorneys general of California and Delaware have the authority to approve or block the restructuring, as OpenAI is incorporated in Delaware and operates from San Francisco. While the Delaware AG has said she would review the transaction to protect the public's interests, the California AG's office has not commented on any potential investigation [3][5].
As the debate intensifies, the outcome of this controversy could have far-reaching implications for the governance of AI development and the balance between technological advancement and public accountability in the rapidly evolving field of artificial intelligence.
Reference
[1]
[2] Financial Times News | Ex-OpenAI staff and top AI experts seek to block proposed for-profit restructure
[3]
[5]