4 Sources
[1]
'We will learn quickly and course-correct' -- Sam Altman says this is OpenAI's future, but it's not the one it started with
Sam Altman posts a new 'Our Principles' document, and it's a big shift

OpenAI has just published a new document called 'Our Principles', written by Sam Altman, and at first glance it reads like a simple corporate manifesto update. But the more I read it, the more I start to think that it isn't what it appears to be. Something has definitely changed here compared to OpenAI's previous statements, so let's unpack what it is. We can start by looking at what it is not saying, as well as what it is saying.

The race to AGI -- is it still happening?

For a start, the post is very deliberately credited to Sam Altman, as if it's something of a personal mission statement, which made me want to compare it to his previous blogs on AI. What immediately struck me as curious about the new principles document, compared to his old blogs, is the lack of reference to Artificial General Intelligence (AGI). Achieving AGI for the benefit of humanity was, after all, the whole goal of creating OpenAI in the first place, but it is only mentioned in passing a few times in the new document. It seems to have been replaced by talk of broader AI deployment instead.

Eleven months ago, Altman was talking in much stronger terms about AGI in his personal blog: "We are past the event horizon," he wrote; "the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be." Reading it back now, it sounds like AGI was about to happen at any moment. Compared to the latest document, the language has softened significantly. If the takeoff had started, it seems the AGI rocket is still on the landing pad.

AI: The risks vs the rewards

While the broader benefits of AI are stressed -- "a lot of the things we've only let ourselves dream about in sci-fi could become reality", Altman says -- there's also the acknowledgement that these good outcomes aren't guaranteed.
"Power in the future can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralized way by people", Altman says. "We believe the latter is much better, and our goal is to put truly general AI in the hands of as many people as possible."

Later the document raises more of the dangers of AI, particularly regarding pathogens: "No AI lab can ensure a good future alone. For an obvious example, there may be extremely capable models that make it easier to create a new pathogen, and we need a society-wide approach to defend against this with pathogen-agnostic countermeasures." I was left wondering whether to be excited or scared for the future. And that feeling of contradiction only grew the more I read...

Competitive spirit

While the document initially reads as collaborative, with a lot of talk about getting AI into the hands of everyone ("We want a future where everyone can have an excellent life", Altman says), the question I have is how that happens. Altman is talking about distribution, and in practice that usually means shipping faster, integrating AI into more products and reaching more users -- all things which define competition in tech. For that to happen in practice, it feels like OpenAI is going to have to become even more competitive than it already is.

And when Altman says later, "We will learn quickly and course-correct", he means shipping, learning, improving and repeating, which sits uneasily next to the new document's heavy safety framing. Even when he says "We deserve an enormous amount of scrutiny", it sounds humble on the surface, but strategically it's perhaps more about OpenAI's willingness to justify decisions publicly. It's the kind of language companies tend to use when they're under increasing scrutiny. In fact, the more I read the whole document, the more it starts to sound contradictory.
Safety implies restraint and deliberation, while scale implies speed and iteration. After reading the document, I'm still not entirely sure what OpenAI's principles actually are. Principles should be clear and emphatic, while this document feels softer and more flexible, giving OpenAI more room for maneuver than it had before. It's hard not to think of the famous quote attributed to Groucho Marx: "Those are my principles, and if you don't like them... well, I have others."
[2]
OpenAI just changed its principles. Here's what's changing
What has changed in almost a decade of OpenAI's mission? Here's a look at the company's new 'principles' document.

OpenAI is less concerned with artificial general intelligence (AGI) than it was almost a decade ago and is instead prioritising a broader rollout of its technology, according to a new mission statement for the company. On Sunday, OpenAI published an update to the company's "Our Principles" document, which sets out how the company will run its technology in the future. There are some key differences between this new set of principles and what the company prioritised almost a decade ago, when it was a nascent non-profit artificial intelligence (AI) research organisation.

De-emphasis on artificial general intelligence

In 2018, OpenAI was staunchly focused on artificial general intelligence (AGI): the idea that its technology would surpass human intelligence. Now, AGI is just one part of the company's wider AI rollout. Both versions of the company's principles say that OpenAI's mission is to guarantee this technology "benefits all of humanity," but the 2018 version explicitly mentions building it safely and beneficially. "Our primary fiduciary duty is to humanity," the document reads. "We anticipate needing to marshal substantial resources to fulfil our mission, but will always diligently act to minimise conflicts of interest among our employees and stakeholders that could compromise broad benefit."

The 2026 version, however, says the company needs to continue to build safe systems, but that society needs to contend with "each successive level of AI capability, understand it, integrate it, and figure out the best path forward together." The way forward, as CEO and cofounder Sam Altman sees it in 2026, is to democratise AI at all levels by giving everyone access to it and resisting the idea that the technology could "consolidate power in the hands of the few".
The 2026 principles document also says that OpenAI expects to work with governments, international agencies and other AGI initiatives to "sufficiently solve serious alignment, safety or societal problems before proceeding further" with its work. Examples could include using ChatGPT to fight back against models that could create new pathogens, or integrating cyber-resilient models into critical infrastructure. Altman gave some clues about OpenAI's de-emphasis of AGI on his personal blog earlier this month. AGI has a "ring of power" to it that "makes people do crazy things," Altman wrote. To fight back, he said the only solution is to "orient towards sharing the technology with people broadly, and for no one to have the ring."

OpenAI will no longer step aside to assist a safety-focused rival

In 2018, OpenAI said it was concerned that AGI development was becoming "a competitive race without time for adequate safety precautions." It committed to stop competing with, and start assisting, any "value-aligned, [and] safety-conscious project" that came closer to building AGI. "We will work out specifics... but a typical triggering condition might be a 'better-than-even chance of success in the next two years'," the 2018 document reads.

In 2026, there is no mention of stepping aside to help a greater cause. Instead, the document acknowledges that OpenAI "is a much larger force in the world than it was a few years ago," and pledges to be transparent about when and how its operating principles could change. The company has been in major competition with several rivals, including Anthropic. In February, Anthropic refused to give US President Donald Trump's administration unfettered access to its AI for the military, which led to the company being labelled a supply chain risk and to federal agencies being ordered to stop using Anthropic's AI assistant Claude in March.
On February 28, OpenAI then stepped in to fill the void, signing a deal with the Department of War, which saw some users boycotting ChatGPT in favour of Claude. Anthropic was also valued this month at $800 billion (€696 billion), on par with OpenAI.

Vague society-wide callouts

In the 2026 document, OpenAI asks for several societal changes so the world can better adapt to AI. "We envision a world with widespread flourishing at a level that is currently difficult to imagine," the document reads. "A lot of the things we've only let ourselves dream about in sci-fi could become reality, and most people could live more meaningful lives than most are able to today." This future is not guaranteed, because power can either be "held by a small handful of companies using and controlling superintelligence," or "held in a decentralised way by people," the document reads. The principles document also reiterates some of OpenAI's recent policy suggestions, such as asking governments to consider "new economic models" and to develop new technology that will drive down the costs of AI infrastructure. "A lot of the things that we do that look weird -- buying huge amounts of compute while our revenue is relatively small... are driven by our fundamental belief in a future of universal prosperity," the document reads.
[3]
Sam Altman outlines five principles for OpenAI's AGI development
OpenAI CEO Sam Altman has revealed five core principles guiding the development of artificial general intelligence. These principles aim to ensure AGI benefits everyone. They focus on making AI accessible to many, empowering individuals, fostering universal prosperity, building resilience against risks, and maintaining adaptability as the technology evolves.

In a blog post published on Sunday, OpenAI CEO Sam Altman shared a set of five guiding principles that he says will shape the development of artificial general intelligence (AGI). "Power in the future can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralised way by people. We believe the latter is much better, and our goal is to put truly general AI in the hands of as many people as possible," Altman said. "Our mission is to ensure that AGI benefits all of humanity. Here are the principles that guide our work," he added.

The principles

The five principles are: democratisation, empowerment, universal prosperity, resilience, and adaptability.

Democratisation: Altman explained that access alone is not enough. Decisions about how AI is used and developed should also be guided by democratic and fair processes, rather than being controlled solely by major AI labs. "We will resist the potential of this technology to consolidate power in the hands of the few," he said.

Empowerment: The second principle focuses on helping individuals make the most of AI. "We believe AI can empower everyone to achieve their goals, learn more, be happier and more fulfilled, and pursue their dreams, and that society as a whole will benefit from this." He added that this means building tools that are both useful and safe, with careful development to minimise risks and prevent harm while gradually expanding their capabilities.
Universal prosperity: Altman said that giving people easy-to-use AI with a lot of computing power can unlock new ways to create value and improve quality of life, especially through scientific discovery. He noted that to ensure these benefits are widely shared, governments may need new economic approaches, and there must be significant investment to make AI infrastructure cheaper. "A lot of the things that we do that look weird -- buying huge amounts of compute while our revenue is relatively small, vertically integrating to lower costs and make our technology easier to use, pushing to build data centers all around the world, and much more -- are driven by our fundamental belief in a future of universal prosperity," he added.

Resilience: Altman said AI will bring new risks, and addressing them will require close collaboration with governments, companies, and society. "We will make significant use of our Foundation's resources to support this work," he noted. He added that no one organisation can ensure safety, and stressed the need to carefully roll out AI, strengthen security, and work together to solve major safety challenges.

Adaptability: Finally, he said that the company believes the best way to handle an uncertain future is to keep adapting as it learns more, adding that as OpenAI has grown, it will be transparent about when and why its approach changes. "As a concrete example, while we are quite confident that universal prosperity will remain really important, we can imagine periods in the future where we have to trade off some empowerment for more resilience," he said.
[4]
Sam Altman's 5 AGI principles vs his track record: Does it add up?
AI safety promises clash with diluted superalignment compute and Pentagon deal

Pausing its pace of launching new GPTs for a brief moment, OpenAI published a document outlining something more foundational: a mission statement of sorts by Sam Altman, specifically with respect to AGI development. According to the document posted under Sam Altman's name on OpenAI's website, the ChatGPT maker's path to artificial general intelligence will be based on five key principles: Democratization, Empowerment, Universal Prosperity, Resilience, and Adaptability.

The document is a show of intent from Sam Altman, committing OpenAI to several key things. Chief among them is to resist power consolidation in the hands of a few, and to make sure key decisions about AI are made through democratic processes - not just by AI labs alone.

Of course, this document couldn't be more ironic, especially when juxtaposed against Sam Altman's own track record over the past couple of years. When you compare Altman's actions against what he's outlining in this new OpenAI policy document, the contradictions are hard to miss. I've attempted to outline a few important ones below.

The first real concern about this new sermon on AGI from Sam Altman is about democratization itself. Following the November 2023 board crisis, Altman returned to a reshaped OpenAI board far more tightly aligned with him, which makes any future firing extremely unlikely. In April 2026's bombshell investigation by The New Yorker, Sam Altman is described as exhibiting "a relentless will to power" that is striking even among Silicon Valley's most ambitious industrialists. I'll let you guess how democratic any process towards AGI will be, if that's the case.

The second tension relates to democratic process in the form of governance. According to Altman's new AGI principles, democratic governance is non-negotiable.
But Altman has repeatedly been accused of publicly advocating for AI regulation while privately working to weaken AI safety rules. After advocacy groups questioned OpenAI's restructure, the company served legal notices to at least seven of them, including the San Francisco Foundation. Sending legal notices to civil-society critics is hardly the sign of a company interested in democratic decision-making, right?

According to a blog post, Sam Altman had previously pledged to allocate 20% of OpenAI's compute to a superalignment team focused on mitigating AI risk. In practice, however, the team reportedly received a fraction of that, and on outdated hardware, as top resources were prioritised for pushing out commercial products, according to a Fortune report. Multiple reports allege that since Altman's 2023 ouster and return, OpenAI has led the AI industry in subordinating safety to speed-to-market.

A specific contradiction exists on defence-related contracts, if you contextualise the AGI principles. OpenAI strictly prohibits domestic mass surveillance and fully autonomous weapons development among its safety principles, but its February 2026 Pentagon agreement did not make these prohibitions legally binding. What is the point of safety principles for future AGI development if they can't stand the test of time today?

Writing a new policy document is easy, and posting it on the company's blog even easier, but what Sam Altman needs to know is that there's clearly a trust deficit between him, as the poster boy of AI, and the industry at large. And to earn back that lost trust, his actions will have to speak louder than any words posted on OpenAI's website.
Sam Altman published OpenAI's "Our Principles" document, outlining five core principles for AGI development: democratization, empowerment, universal prosperity, resilience, and adaptability. The document marks a shift from OpenAI's original AGI-focused mission toward broader AI deployment, but critics point to contradictions between Altman's stated principles and his track record on AI safety initiatives and governance.
Sam Altman has published a new document titled "Our Principles" that outlines OpenAI's approach to Artificial General Intelligence and broader AI deployment [1]. The document, deliberately credited to Altman personally, presents five core principles: democratization, empowerment, universal prosperity, resilience, and adaptability [3]. "Power in the future can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralized way by people," Altman stated, emphasizing OpenAI's commitment to decentralizing AI power [3].
The most striking change in OpenAI's Our Principles document is the de-emphasis on achieving AGI, which was the company's founding purpose almost a decade ago [2]. While the 2018 version explicitly stated that OpenAI's "primary fiduciary duty is to humanity" and focused on building AGI safely and beneficially, the 2026 version treats AGI as just part of the company's wider AI rollout [2]. Just eleven months ago, Altman wrote on his personal blog that "the takeoff has started" and humanity was "close to building digital superintelligence" [1]. The language has softened significantly, suggesting the AGI rocket remains on the landing pad despite earlier predictions.

Altman's democratization principle emphasizes that access alone isn't enough, arguing that decisions about AI development should be guided by democratic processes rather than controlled solely by major AI labs [3]. The document envisions "a world with widespread flourishing at a level that is currently difficult to imagine," where "a lot of the things we've only let ourselves dream about in sci-fi could become reality" [2]. To achieve universal prosperity, Altman noted that governments may need "new economic models" and significant investment to lower AI infrastructure costs [3]. OpenAI's strategy of "buying huge amounts of compute while our revenue is relatively small" and "vertically integrating to lower costs" reflects this commitment to making AI accessible [3].
The resilience principle addresses the benefits and risks of AI, acknowledging that "no AI lab can ensure a good future alone". Altman specifically mentioned risks from "extremely capable models that make it easier to create a new pathogen," requiring society-wide defense approaches [1]. However, critics point to contradictions between stated principles and actions. Altman had previously pledged to allocate 20% of OpenAI's compute to a superalignment team focused on mitigating AI risk, but the team reportedly received only a fraction of that, on outdated hardware, as resources were prioritized for commercial products [4]. OpenAI's February 2026 Pentagon agreement raised additional concerns, as it didn't make the company's prohibitions on domestic mass surveillance and fully autonomous weapons legally binding [4].

Notably absent from the 2026 document is OpenAI's 2018 commitment to step aside and assist any "value-aligned, safety-conscious project" that comes closer to building AGI [2]. Instead, the document acknowledges OpenAI "is a much larger force in the world than it was a few years ago" [2]. Following the November 2023 board crisis, Altman returned to a reshaped board more tightly aligned with him, and after advocacy groups questioned OpenAI's restructure, the company served legal notices to at least seven of them [4]. The adaptability principle states "we will learn quickly and course-correct," which suggests shipping, learning, and iterating rapidly [1]. This sits uneasily next to the document's heavy safety framing: as AI regulation experts note, safety implies restraint while scale implies speed. As one analysis concluded, there's clearly a trust deficit between Altman and the industry, and earning back that trust will require actions that speak louder than words posted on the ChatGPT maker's website [4].
Summarized by Navi