Curated by THEOUTPOST
On Mon, 10 Feb, 8:00 AM UTC
6 Sources
[1]
Sam Altman pledges more openness as OpenAI works toward AGI - SiliconANGLE
OpenAI Chief Executive Sam Altman has written an essay on his personal blog today that spells out how his company intends to ensure that "everyone on Earth" will be able to leverage artificial general intelligence to fulfill their goals and expand their creativity. Part of that plan entails "strange-sounding ideas" like giving everyone a "compute budget" to make certain that the benefits of AGI are widely distributed, he said.

"The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas," wrote Altman (pictured). "In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention."

Altman defines AGI as "a system that can tackle increasingly complex problems, at human level, in many fields," and he claims that we're close to creating such systems. His claims may stoke concerns that AGI could ultimately lead to mass unemployment in many industries, and Altman admitted that its arrival is going to need "lots of human supervision and direction" to avoid this kind of disruption. However, he insisted that AGI systems "will not have the biggest new ideas," and that they will be "great at some things but surprisingly bad at others."

The real value of AGI will be realized when we start running such systems at enormous scales. Altman said he envisions the possibility of millions of hyperscale AI systems that can tackle problems "in every field of knowledge work." Such systems won't come cheap, Altman conceded. He noted that the progress seen so far in the AI industry shows that it's possible to spend "arbitrary amounts of money and get continuous and predictable gains" in performance.
That explains why OpenAI is currently said to be in talks over an eye-watering $40 billion funding round, just three months after closing on $6.6 billion. The company has also pledged, alongside partners such as Oracle Corp., to invest up to $500 billion in data centers through U.S. President Donald Trump's Project Stargate. However, Altman insists that though AI development requires staggering amounts of money, the cost of using it falls by about 10 times every 12 months. So while building more powerful AI systems will cost money, that technology will become increasingly accessible to everyone. The rise of Chinese AI startup DeepSeek Ltd. and others supports that argument, and suggests that developing and training powerful AI systems might become more affordable too, but Altman insisted that massive investments will still be required to achieve AGI.

If and when OpenAI can create an AGI-level system, the company will no doubt be compelled to make some "major decisions" and also introduce some "limitations" relating to safety that may well prove to be unpopular, Altman said. OpenAI previously pledged that it would stop competing with, and start assisting, any value-aligned and safety-conscious project that gets closer to creating true AGI than it does, in order to ensure its safe development. But that pledge was made when OpenAI was still committed to its nonprofit status. These days, the company is trying to restructure itself as a for-profit organization that hopes to achieve $100 billion in annual revenue by 2029.

As such, Altman said the company's mission now is focused "more towards individual empowerment" as it works toward AGI, while making efforts to prevent such systems from being used by authoritarian governments to "control their population through mass surveillance" and forestall the "loss of autonomy." Such an approach will likely mean OpenAI has to commit to more openness about its AI systems, Altman said.
Recently, he admitted that OpenAI has been "on the wrong side of history" in terms of open source, keeping the codebase and training data of its most powerful AI systems under wraps. "Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more, and accept that there is a balance between safety and individual empowerment that will require trade-offs," Altman said.
[2]
Sam Altman pledges more openness as OpenAI works towards AGI - SiliconANGLE
OpenAI Chief Executive Sam Altman has written an essay on his personal blog that spells out how his company intends to ensure that "everyone on Earth" will be able to leverage artificial general intelligence to fulfill their goals and expand their creativity. Part of that plan entails "strange-sounding ideas" like giving everyone a "compute budget" to make certain that the benefits of AGI are widely distributed, he said.

"The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas," Altman (pictured) said. "In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention."

Altman defines AGI as "a system that can tackle increasingly complex problems, at human level, in many fields", and he claims that we're close to creating such systems. His claims may stoke concerns that AGI could ultimately lead to mass unemployment in many industries, and Altman admitted that its arrival is going to need "lots of human supervision and direction" to avoid this kind of disruption. However, he insisted that AGI systems "will not have the biggest new ideas", and that they will be "great at some things but surprisingly bad at others".

The real value of AGI will be realized when we start running such systems at enormous scales. Altman said he envisions the possibility of millions of hyperscale AI systems that can tackle problems "in every field of knowledge work". Such systems won't come cheap, Altman conceded. He noted that the progress seen so far in the AI industry shows that it's possible to spend "arbitrary amounts of money and get continuous and predictable gains" in performance.
That explains why OpenAI is currently said to be in talks over an eye-watering $40 billion funding round, just three months after closing on $6.6 billion. The company has also pledged, alongside partners such as Oracle Corp., to invest up to $500 billion in data centers through U.S. President Donald Trump's Project Stargate. However, Altman insists that while AI development requires staggering amounts of money, the cost of using it falls by about 10 times every 12 months. So while building more powerful AI systems will cost money, that technology will become increasingly accessible to everyone. The rise of Chinese AI startup DeepSeek Ltd. and others supports that argument, and suggests that developing and training powerful AI systems might become more affordable too, but Altman insisted that massive investments will still be required to achieve AGI.

If and when OpenAI can create an AGI-level system, the company will no doubt be compelled to make some "major decisions" and also introduce some "limitations" relating to safety that may well prove to be unpopular, Altman said. OpenAI previously pledged that it would stop competing with, and start assisting, any value-aligned and safety-conscious project that gets closer to creating true AGI than it does, in order to ensure its safe development. But that pledge was made when OpenAI was still committed to its nonprofit status. These days, the company is trying to restructure itself as a for-profit organization that hopes to achieve $100 billion in annual revenue by 2029.

As such, Altman said the company's mission now is focused "more towards individual empowerment" as it works towards AGI, while making efforts to prevent such systems from being used by authoritarian governments to "control their population through mass surveillance" and forestall the "loss of autonomy". Such an approach will likely mean OpenAI has to commit to more openness about its AI systems, Altman said.
Recently, he admitted that OpenAI has been "on the wrong side of history" in terms of open source, keeping the codebase and training data of its most powerful AI systems under wraps. "Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more, and accept that there is a balance between safety and individual empowerment that will require trade-offs," Altman said.
[3]
AI Likely to Increase Inequality, Sam Altman Admits, or Help Governments "Control Their Population Through Mass Surveillance and Loss of Autonomy"
In a new post on his personal blog, OpenAI CEO Sam Altman warned that AI could lead to plenty of economic inequality while allowing authoritarian governments to "control their population through mass surveillance and loss of autonomy."

OpenAI has made it its goal to realize "artificial general intelligence" -- the still-entirely-hypothetical point at which an AI can achieve and surpass the intellectual capabilities of a human -- that "benefits all of humanity." If anyone does manage to build AGI, ensuring that its "socioeconomic value" benefits everyone equally may prove far more difficult, especially if AI causes countless people to lose their jobs. "In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention," Altman argued.

In response, Altman proposes looking into "strange-sounding ideas like giving some 'compute budget' to enable everyone on Earth to use a lot of AI." Alternatively, the CEO suggests "just relentlessly driving the cost of intelligence as low as possible," which would also allegedly allow everybody to benefit equally from AI. Altman argued that "metrics we care about," including "health outcomes" and "economic prosperity," simply "get better on average and over the long-term." Yet "increasing equality does not seem technologically determined, and getting this right may require new ideas." "Anyone in 2035 should be able to marshall the intellectual capacity equivalent to everyone in 2025," he concluded. "Everyone should have access to unlimited genius to direct however they can imagine."

But should we really take any of the billionaire's arguments at face value? For one, the basic concept of AGI remains a distant pipe dream. Researchers have yet to iron out persistent issues with the tech, from rampant hallucinations to astronomical and environmentally damaging power requirements, DeepSeek's recent breakthrough notwithstanding.
And at its core, Altman's role as the head of the ChatGPT maker often makes it sound like he's selling dreams more than a concrete reality. "The economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential," he wrote.

But even Altman, who heads a company that's trying to raise yet another $40 billion, realizes this may not actually "benefit all" without at least some degree of intervention. Whether vague ideas like handing out a "compute budget" to everyone on the planet will prove helpful remains to be seen. We still don't even know if we'll ever reach the point of realizing an AGI that can turn society on its head. Is Altman putting the cart before the horse -- or is he sending a warning, predicting a chaotic breakdown in the "balance of power between capital and labor"?
[4]
OpenAI CEO Sam Altman admits that AI's benefits may not be widely distributed | TechCrunch
In a new essay on his personal blog, OpenAI CEO Sam Altman said the company is open to a "compute budget," among other "strange-sounding" ideas, to "enable everyone on Earth to use a lot of AI" and ensure the benefits of the technology are widely distributed.

"The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas," Altman wrote. "In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention."

Solutions to this problem, like Altman's "compute budget" concept, may be easier to propose than execute. Already, AI is impacting the labor market, resulting in job cuts and departmental downsizing. Experts have warned that mass unemployment is a possible outcome of the rise of AI tech if not accompanied by the right government policies and reskilling and upskilling programs.

Not for the first time, Altman claims that artificial general intelligence (AGI) -- which he defines as "[an AI] system that can tackle increasingly complex problems, at human level, in many fields" -- is near. Whatever form it takes, this AGI won't be perfect, Altman warns, in the sense that it may "require lots of human supervision and direction." "[AGI systems] will not have the biggest new ideas," Altman wrote, "and it will be great at some things but surprisingly bad at others."

But the real value from AGI will come from running these systems on a massive scale, Altman asserted. Similar to OpenAI rival Anthropic's CEO, Dario Amodei, Altman envisions thousands or even millions of hyper-capable AI systems tackling tasks "in every field of knowledge work." One might assume that'll be an expensive vision to realize.
Indeed, Altman observed that "you can spend arbitrary amounts of money and get continuous and predictable gains" in AI performance. That's perhaps why OpenAI is reportedly in talks to raise up to $40 billion in a funding round, and has pledged to spend up to $500 billion with partners on an enormous data center network. Yet Altman also makes the case that the cost to use "a given level of AI" falls about 10x every 12 months. In other words, pushing the boundary of AI technology won't get cheaper, but users will gain access to increasingly capable systems along the way. Capable, inexpensive AI models from Chinese AI startup DeepSeek and others seem to support that notion. There's evidence to suggest that training and development costs are coming down as well, but both Altman and Amodei have argued that massive investments will be required to achieve AGI-level AI -- and beyond.

As for how OpenAI plans to release AGI-level systems (assuming it can, in fact, create them), Altman said that the company will likely make "some major decisions and limitations related to AGI safety that will be unpopular." OpenAI once pledged that it would stop competing with, and start assisting, any "value-aligned," "safety-conscious" project that comes close to building AGI before it does, out of concern for safety. Of course, that was when OpenAI intended to remain a nonprofit. The company is in the process of converting its corporate structure into that of a more traditional, profit-driven org. OpenAI reportedly aims to reach $100 billion in revenue by 2029, equal to the current annual sales of Target and Nestlé.

This being the case, Altman added that OpenAI's goal as it builds more powerful AI will be to "trend more towards individual empowerment" while forestalling "AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy."
Altman recently said that he thinks OpenAI has been on the wrong side of history when it comes to open-sourcing its technologies. While OpenAI has open-sourced tech in the past, the company has generally favored a proprietary, closed-source development approach. "AI will seep into all areas of the economy and society; we will expect everything to be smart," Altman said. "Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more, and accept that there is a balance between safety and individual empowerment that will require trade-offs."

Altman's blog post comes ahead of this week's AI Action Summit in Paris, which has already prompted other tech notables to outline their own visions for AI's future. In a footnote, Altman added that OpenAI does not, in fact, plan to end its relationship with close partner and investor Microsoft anytime soon by declaring that it has achieved AGI. Microsoft and OpenAI reportedly had a contractual definition of AGI -- AI systems that can generate $100 billion in profits -- that would, once met, allow OpenAI to negotiate more favorable investment terms. Altman said, however, that OpenAI "fully expect[s] to be partnered with Microsoft for the long term."
[5]
Sam Altman: AI Safety Decisions Will Be Unpopular -- But Mass Surveillance by 'Authoritarian Governments' the Real Threat
More capable AI could help enhance existing surveillance technologies by connecting divergent data flows.

In a recent blog post, Sam Altman predicted that as AGI approaches, AI developers will have to make some unpopular decisions in the name of safety. Generally, the OpenAI CEO said he is in favor of letting individuals use AGI as they like. However, he acknowledged risks including the threat from "authoritarian governments" that will require some tradeoffs between individual empowerment and safety.

AI Surveillance and Authoritarianism

Altman's reference to AGI surveillance reflects growing concerns about the use of AI to monitor, track, and record individuals' behavior across multiple domains. The archetypal AI surveillance technology is facial recognition, which has unsurprisingly become a favorite tool of the kind of autocratic governments Altman alluded to in his blog post. For instance, having aggressively deployed AI-powered facial recognition to support its own surveillance state, China has now become a major exporter of the technology. Moreover, while Western firms developing the technology are restricted from selling it to the most authoritarian governments, there are few such limitations for the Chinese AI developers behind some of the most powerful facial recognition systems on the market. An analysis of Chinese technology exports in 2024 observed a strong "autocratic bias" in facial recognition. In other words, Chinese companies generate far more business from other autocratic regimes than they do from liberal democracies.

AGI and Surveillance

While facial recognition typically analyzes CCTV footage, other AI surveillance tools monitor social media, digital communications, internet traffic, and financial transactions. Connecting the dots between various online and offline activities has traditionally been a challenge for law enforcement and intelligence agencies tasked with monitoring and policing populations.
But in the future, more capable AI systems, which Altman refers to as AGI, may be able to automate this process, creating powerful new surveillance tools with truly Orwellian potential.

Competing Interests in AGI Development

Altman clearly envisages OpenAI acting as a force for good in this story. In his blog post, his reference to unpopular "decisions and limitations related to AGI safety" suggests a desire to limit the technology's potential abuse, even if that means restricting it. Despite having a strong influence on the AGI narrative, partly due to Altman's own interest in the term, OpenAI is far from the only player shaping the direction of AI development. Post-DeepSeek, the company's once unquestionable AI leadership looks increasingly shaky. So, too, does Altman's belief that "you can spend arbitrary amounts of money and get continuous and predictable gains." After all, DeepSeek made a major AI breakthrough that few predicted by spending significantly less than some of its American peers. As the technology evolves, restraints will need to be negotiated by private corporations and national governments alike. That includes restraints on companies like OpenAI, whose surveillance machine operates on a different level from state-administered systems but remains formidable nonetheless.
[6]
Sam Altman Hints at More Open-Source AI as OpenAI Moves Toward AGI
"The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use." OpenAI is exploring the open-sourcing of AI as it moves toward Artificial General Intelligence (AGI), CEO Sam Altman said in a recent blog post. "AI will seep into all areas of the economy and society; we will expect everything to be smart. " Altman wrote. "Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more." Altman outlined three key economic trends in AI -- intelligence scaling predictably with resource investment, a rapid decline in AI costs, and super-exponential socioeconomic value from increasing intelligence. He noted that AI token costs have fallen 150 times between early 2023 and mid-2024, a pace exceeding Moore's Law. "The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use," he said. He warned that while AGI's benefits could be widely distributed, its impact will be uneven across industries. Some sectors may remain largely unchanged, while scientific progress is expected to accelerate significantly. "The balance of power between capital and labour could easily get messed up," Altman said, suggesting that intervention might be necessary. "OpenAI is considering strange-sounding ideas such as providing a compute budget to enable global AI access." AI agents, which Altman described as virtual co-workers, are being rolled out. He projected that software engineering agents could eventually handle tasks similar to junior developers at top firms, though requiring supervision. He also speculated on the impact of deploying such agents at scale across multiple industries. Altman acknowledged risks in AGI deployment and noted that public policy will play a crucial role in shaping its integration. He cautioned against authoritarian control of AI and stressed that individual empowerment should be prioritised. 
OpenAI continues to launch products "early and often" to allow society and technology to co-evolve, Altman wrote. He emphasised the long-term goal of making AGI a tool that enhances human capability, stating that by 2035, "anyone should be able to marshal the intellectual capacity equivalent to everyone in 2025."
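The cost figures above can be sanity-checked with a little arithmetic. The sketch below uses only the two numbers reported from Altman's post (a ~150x token-cost drop over roughly 18 months, and a trend of ~10x per 12 months); the helper function is purely illustrative, not anything from OpenAI.

```python
def annualized_decline(total_factor: float, months: float) -> float:
    """Convert a total cost-decline factor over `months` into a per-12-month factor."""
    return total_factor ** (12.0 / months)

# The reported GPT-era token-price drop (~150x from early 2023 to mid-2024,
# i.e. about 18 months), expressed as a per-year factor:
observed = annualized_decline(150, 18)   # roughly 28x per year, ahead of the 10x trend

# Cumulative effect of the stated 10x-per-12-months trend over five years:
relative_cost_after_5_years = 1 / 10**5  # one hundred-thousandth of today's cost

print(f"annualized factor: {observed:.1f}x")
print(f"cost after 5 years at 10x/yr: {relative_cost_after_5_years:.0e}")
```

In other words, if the 10x-per-year trend holds, a capability that costs $100,000 to serve today would cost about $1 in five years, which is the mechanism behind the accessibility argument in the post.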
OpenAI CEO Sam Altman discusses the company's approach to developing AGI, addressing concerns about inequality, surveillance, and the need for openness in AI development.
Sam Altman, CEO of OpenAI, has outlined the company's approach to developing Artificial General Intelligence (AGI) in a recent blog post, addressing concerns about equality, safety, and the societal impact of advanced AI systems [1][2]. Altman defines AGI as "a system that can tackle increasingly complex problems, at human level, in many fields," and claims that such systems are on the horizon [1].
Altman acknowledges that technological progress historically improves various metrics like health outcomes and economic prosperity. However, he expresses concern about the potential for AI to exacerbate inequality, particularly in the balance of power between capital and labor [1][2]. To address this, Altman proposes "strange-sounding ideas" such as providing everyone with a "compute budget" to ensure widespread access to AI benefits [1][3].
OpenAI's ambitious plans for AGI development come with significant financial implications. The company is reportedly in talks for a $40 billion funding round and has pledged, along with partners like Oracle, to invest up to $500 billion in data centers through Project Stargate [1][4]. Despite these massive investments, Altman argues that the cost of using AI falls by about 10 times every 12 months, potentially making the technology more accessible over time [1][2].
As OpenAI works towards AGI, Altman emphasizes the need for "lots of human supervision and direction" to prevent disruptions like mass unemployment [1]. He also warns about the potential misuse of AI by authoritarian governments for mass surveillance and population control [3][5]. To mitigate these risks, OpenAI may need to make "major decisions and limitations related to AGI safety" that could prove unpopular [2][4].
Recognizing past criticism, Altman admits that OpenAI has been "on the wrong side of history" regarding open-source practices [1][2]. The company now plans to trend towards individual empowerment and greater openness, including potentially open-sourcing more of its technology [4]. This shift represents a balance between safety concerns and the desire to give people more control over AI technology [2][4].
The rise of companies like DeepSeek, a Chinese AI startup, suggests that the development of powerful AI systems may become more affordable and accessible [1][4]. However, Altman maintains that massive investments will still be required to achieve AGI [2]. As the AI landscape evolves, OpenAI faces competition and challenges to its leadership position, particularly after DeepSeek's recent breakthrough [5].
Altman envisions a future where millions of hyperscale AI systems can tackle problems "in every field of knowledge work" [1]. He believes that by 2035, individuals should be able to access intellectual capacity equivalent to everyone in 2025 [3]. However, experts warn that without proper government policies and reskilling programs, the rise of AI could lead to mass unemployment and significant societal changes [4].
As AI continues to advance, the need for international cooperation and regulation becomes increasingly apparent. The upcoming AI Action Summit in Paris highlights the global focus on shaping AI's future and addressing its potential impacts on society, economy, and individual freedoms [4][5].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved