7 Sources
[1]
Sam Altman says the Singularity is imminent - here's why
In a new blog post, Altman laid out his vision for a hugely prosperous future powered by superintelligent AI. We'll figure things out as we go along, he argues. In his 2005 book "The Singularity Is Near," the futurist Ray Kurzweil predicted that the Singularity -- the moment in which machine intelligence surpasses our own -- would occur around the year 2045. Sam Altman believes it's much closer. In a blog post published Tuesday, the OpenAI CEO delivered a homily devoted to what he views as the imminent arrival of artificial "superintelligence." Whereas artificial general intelligence, or AGI, is usually defined as a computer system able to match or outperform humans on any cognitive task, a superintelligent AI would go much further, overshadowing our own intelligence to such a vast degree that we'd be helpless to fathom it, like snails trying to understand general relativity. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Crucially, the blog post frames this supposedly inevitable arrival of superintelligent AI as one that will happen gradually enough for society to prepare itself. To make his case, he looks back at the past five years, a relatively short period of time in which most people have gone from knowing nothing about AI to using powerful tools like ChatGPT on a daily basis, to the point where generative AI has become almost mundane. "Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be," Altman wrote. His writing veers at times into quasi-religious territory, portraying the arrival of superintelligent AI in terms reminiscent of early Christian prophets describing the Second Coming.
While his prose is cloaked in scientific, secular language, it's hard not to detect just a hint of proselytizing: "the 2030s are likely going to be wildly different from any time that has come before," he writes. "We do not know how far beyond human-level intelligence we can go, but we are about to find out." AGI -- and by extension, artificial superintelligence -- has been a divisive subject in the tech world. Like Altman, many believe its arrival is not a matter of if, but when. Meta is reportedly preparing to launch an internal research lab devoted to building superintelligence. Others doubt that it's even technically possible to build a machine that's more advanced than the human brain. OpenAI catapulted to global fame following its release of ChatGPT in late 2022. Since then, the company has been shipping new AI products at a breakneck pace, prompting a steady trickle of employees to depart, citing concerns that safety was being deprioritized in the name of speed. Many have joined Anthropic, an AI company itself founded by former OpenAI employees, or have gone on to launch their own ventures. Ilya Sutskever, for example -- a cofounder of OpenAI and its former chief scientist -- founded a company called Safe Superintelligence (SSI) last June. AI developers in general have also been widely criticized for their rush to automate human labor without offering any kind of concrete policy proposals for what millions of job-displaced people in the future ought to do with themselves. Altman's new blog post echoes a refrain that's become common among tech leaders on this front: Yes, there will be some job losses, but ultimately the technology will create entirely new categories of jobs to replace those that have been automated; and besides, AI is going to generate so much wealth for humanity at large that people will have the freedom to pursue more meaningful things than work. (Just exactly what those things are is never made quite clear.)
Altman has also supported the idea of implementing a universal basic income to support the masses as the world adjusts to his vision of a techno-utopia. "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything," he wrote in the blog post. "There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before." OpenAI has evolved quickly and dramatically since it was founded a decade ago. It began as a nonprofit, aimed -- as its name suggests -- at open-source AI research. It's since become a for-profit behemoth competing with the likes of Google and Meta. The company's mission statement has long been "to ensure that artificial general intelligence benefits all of humanity." Now, judging from Altman's blog post, OpenAI seems to be aiming for an even loftier goal: "before anything else, we are a superintelligence research company."
[2]
Sam Altman's outrageous 'Singularity' blog perfectly sums up AI in 2025
Sam Altman has been a blogger far longer than he's been in the AI business. Now the CEO of OpenAI, Altman began his blog -- titled simply, if concerningly, "Sam Altman" -- in 2013. He was in his third year at the startup accelerator Y Combinator at the time, and would soon be promoted to president. The first page of posts contains no references to AI. Instead we get musings on B2B startup tools, basic dinner party conversation openers, and UFOs (Altman was a skeptic). Then there was this sudden insight: "The most successful founders do not set out to create companies," Altman wrote. "They are on a mission to create something closer to a religion." Fast-forward to Altman's latest 2025 blog post, "The Gentle Singularity" -- and, well, it's hard not to say mission accomplished. "We are past the event horizon; the takeoff has started," is how Altman opens, and the tone only gets more messianic from there. "Humanity is close to building digital superintelligence." Can I get a hallelujah? To be clear, the science does not suggest humanity is close to building digital superintelligence, a.k.a. Artificial General Intelligence. The evidence says we have built models that can be very useful in crunching giant amounts of information in some ways, wildly wrong in others. AI hallucinations appear to be baked into the models, increasingly so with AI chatbots, and they're doing damage in the real world. There are no advances in reasoning, as was made plain in a paper also published this week: AI models sometimes don't see the answer when you tell them the answer. Don't tell that to Altman. He's off on a trip to the future to rival that of Ray Kurzweil, the offbeat Silicon Valley guru who first proposed we're accelerating to a technological singularity. Kurzweil set his all-change event many decades down the line. Altman is willing to risk looking wrong as soon as next year: "2026 will likely see the arrival of systems that can figure out novel insights. 
2027 may see the arrival of robots that can do tasks in the real world ... It's hard to even imagine what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year." The "likely," "may," and "maybe" there are doing a lot of lifting. Altman may have "something closer to religion" in his AGI assumptions, but cannot cast reason aside completely. Indeed, shorn of the excitable sci-fi language, he's not always claiming that much (don't we already have "robots that can do tasks in the real world"?). As for his most outlandish claims, Altman has learned to preface them with a word salad that could mean anything. Take this doozy: "In some big sense, ChatGPT is already more powerful than any human who has ever lived." Can I get a citation needed? Altman's latest blog isn't all future-focused speculation. Buried within is the OpenAI CEO's first-ever statement on ChatGPT's energy and water usage -- and as with his needless drama over a Scarlett Johansson-like voice, opening that Pandora's box may not go the way Altman thinks. Since ChatGPT exploded in popularity in 2023, OpenAI -- along with main AI rivals Google and Microsoft -- has stonewalled researchers looking for details on their data center usage. "We don't even know how big models like GPT are," Sasha Luccioni, climate lead at open-source AI platform Hugging Face, told me last year. "Nothing is divulged, everything is a company secret." Altman finally divulged, kinda. In the middle of a blog post, in parentheses, with the preface "people are often curious about how much energy a ChatGPT query uses," the OpenAI CEO offers two stats: "the average query uses about 0.34 watt-hours ... and about 0.000085 gallons of water." There's no more data offered to confirm these stats; Altman doesn't even specify which model of ChatGPT. OpenAI hasn't responded to multiple follow-up requests from multiple news outlets. 
Altman has an obvious interest in downplaying the amount of energy and water OpenAI requires, and he's already doing it here with a little sleight-of-hand. It isn't the average query that concerns researchers like Luccioni; it's the enormous amount of energy and water required to train the models in the first place. But now that he's shown himself to be responsive to the "often curious," Altman has less of a reason to stonewall. Why not release all the data so others can replicate his numbers, you know, like scientists do? Meanwhile, battles over data center energy and water usage are brewing across the US. Luccioni has started an AI Energy Leaderboard that shows how wildly open source AI models vary in energy use. This is serious stuff, because companies don't like to spend more on energy usage than they need to, and because there's buy-in. Meta and (to a lesser extent) Microsoft and Google are already on the board. Can OpenAI afford not to be? In the end, the answer depends on whether Altman is building a company or more of a religion.
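For what it's worth, Altman's two per-query figures are at least checkable arithmetic. A minimal back-of-envelope sketch, assuming a purely hypothetical volume of one billion queries per day -- the blog post gives no volume figure, so that number is an illustration, not a fact:

```python
# Altman's stated per-query figures from "The Gentle Singularity".
WH_PER_QUERY = 0.34            # watt-hours of electricity per query
GALLONS_PER_QUERY = 0.000085   # gallons of water per query

# Hypothetical daily volume -- an assumption for illustration only.
QUERIES_PER_DAY = 1_000_000_000

# Check the "one fifteenth of a teaspoon" comparison: 1 US gallon = 768 teaspoons.
teaspoons_per_query = GALLONS_PER_QUERY * 768   # ~0.065 tsp, i.e. ~1/15 tsp

# Scale up to fleet-wide daily estimates under the assumed volume.
daily_energy_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6    # megawatt-hours/day
avg_power_mw = daily_energy_mwh / 24                       # average continuous MW
daily_water_gallons = GALLONS_PER_QUERY * QUERIES_PER_DAY  # gallons/day

print(f"~1/{round(1 / teaspoons_per_query)} teaspoon per query")
print(f"{daily_energy_mwh:.0f} MWh/day, ~{avg_power_mw:.1f} MW average draw")
print(f"{daily_water_gallons:,.0f} gallons of water per day")
```

The teaspoon comparison does check out, and even at a billion queries a day the inference-side totals come out looking tame -- which is precisely the sleight-of-hand: the training runs researchers like Luccioni want accounted for are nowhere in these numbers.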
[3]
Sam Altman wants "a significant fraction of the power on Earth" to run AI, and I've unlocked a new Doomsday scenario.
Altman has a dream for the future that is the stuff of my personal nightmares. AMD held its Advancing AI 2025 summit in San Jose, California, on Thursday, headlined by CEO Lisa Su's lengthy keynote address. However, it was not what Su said that alarmed me. OpenAI CEO Sam Altman joined Su on stage to wrap up the two-hour keynote, and their back-and-forth included one short exchange toward the end that now lives rent-free in my brain. Referencing the recent ChatGPT outages, Su asked Altman, "Are there ever enough GPUs?" Altman, whose OpenAI is an AMD customer, replied, "Theoretically, at some points, you can see that a significant fraction of the power on Earth should be spent running AI compute. And maybe we're going to get there." Which sounds great for the AI industry. But what about the rest of us? Okay, look. I'm not the biggest fan of AI usage. One of the few types of AI that I find myself interested in is Intel's on-device custom RAG AI Assistant Builder. It's a small language model that runs locally and is mostly useful for the kind of busywork no one has time for. Cloud-based AI data centers fuel my nightmares. The current pace of Artificial Intelligence growth is unsustainable. While Altman claims that users shouldn't be worried about ChatGPT's energy cost, adding AI usage on top of other sources of environmental pollution puts more pressure on a planet that was in dire straits before OpenAI programmed its first chatbot. AI is not just an industry built on destroying the foundations of its own existence, though that is certainly an economic crisis in the making. It's also incredibly damaging to the environment and to the education of future generations. AI-generated deepfakes are getting so good that most people can't tell real footage from AI video, leading us into a "future of AI control." Without guardrails, unchecked artificial intelligence growth is very much an ouroboros. AI has already negatively impacted most industries. 
If you accept that AI will eventually replace human workers in most roles, then who will buy the products? AI won't be selling laptops, phones, or software to itself. Throw in the ecological and educational damage AI systems can cause, and it's a bleak future we're looking at. Thankfully, large language model (LLM) AI systems like ChatGPT are becoming increasingly efficient over time. So they might hit critical mass soon, and we won't need to reopen Three Mile Island to run virtual assistants. There are also plenty of companies looking into sustainability solutions for AI systems. But is it worth it? We still haven't gotten the "better Siri" we were promised. Agentic AI just sounds like a new way to commit to total social isolation. There may never be a "killer app" for AI.
[4]
OpenAI CEO Says We've Already Passed the "Superintelligence Event Horizon" - Decrypt
ChatGPT now has 800 million weekly users, who Altman said rely on the technology. Humanity may already be entering the early stages of the singularity, the point at which AI surpasses human intelligence, according to OpenAI CEO Sam Altman. In a blog post published Tuesday, Altman said humanity has crossed a critical inflection point -- an "event horizon" -- marking the beginning of a new era of digital superintelligence. "We are past the event horizon; the takeoff has started," he wrote. "Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be." Altman's analysis comes at a time when leading AI developers warn that artificial general intelligence could soon displace workers and disrupt global economies, outpacing the ability of governments and institutions to respond. The singularity is a theoretical point when artificial intelligence surpasses human intelligence, leading to rapid, unpredictable technological growth and potentially profound changes in society. An event horizon is a point of no return, beyond which the course of an object (in this case, AI development) cannot be changed. Altman argued that we're already entering a "gentle singularity" -- a gradual, manageable transition toward powerful digital superintelligence, not a sudden wrenching change. The takeoff has begun, he argues, but remains comprehensible and beneficial. As evidence of that, Altman pointed to the surge in ChatGPT's popularity since its public launch in 2022: "Hundreds of millions of people rely on it every day and for increasingly important tasks," he said. The numbers back him up. In May 2025, ChatGPT reportedly had 800 million weekly active users. Despite ongoing legal battles with authors and media outlets, as well as calls for pauses on AI development, OpenAI shows no signs of slowing down. Altman emphasized that even slight improvements in the technology could deliver substantial benefits. 
But a small misalignment, scaled across hundreds of millions of users, could have serious consequences. To address these misalignments, he offered several suggestions. Altman said the next five years are critical for AI development. "2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same," he said. "2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world." By 2030, Altman predicted, both intelligence and the capacity to generate and act on ideas will be widely available. "Already, we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it," he said, pointing out how quickly people shift from being impressed by AI to expecting it. As the world anticipates the rise of artificial general intelligence and the singularity, Altman believes the most astonishing breakthroughs won't feel like revolutions -- they'll feel ordinary, and the bare minimum AI players need to offer to enter the market. "This is how the singularity goes: wonders become routine, and then table stakes," he said.
[5]
'ChatGPT Is Already More Powerful Than Any Human,' OpenAI CEO Sam Altman Says
OpenAI backer Microsoft and its rivals are investing billions of dollars into AI and jockeying for users in what is becoming a more crowded landscape. Humanity could be close to successfully building an artificial superintelligence, according to Sam Altman, the CEO of ChatGPT maker OpenAI and one of the faces of the AI boom. "Robots are not yet walking the streets," Altman wrote in a blog post late Wednesday, but said "in some big sense, ChatGPT is already more powerful than any human who has ever lived." Hundreds of millions of people use AI chatbots every day, Altman said. And companies are investing billions of dollars in AI and jockeying for users in what is quickly becoming a more crowded landscape. OpenAI, backed by Microsoft (MSFT), wants to build "a new generation of AI-powered computers," and last month announced a $6.5 billion acquisition deal with that goal in mind. Meanwhile, Google parent Alphabet (GOOGL), Apple (AAPL), Meta (META), and others are rolling out new tools that integrate AI more deeply into their users' daily lives. "The 2030s are likely going to be wildly different from any time that has come before," Altman said. "We do not know how far beyond human-level intelligence we can go, but we are about to find out." Eventually, there could be robots capable of building other robots designed for tasks in the physical world, Altman suggested. In his blog post, Altman said he expects there could be "whole classes of jobs going away" as the technology develops, but that he believes "people are capable of adapting to almost anything" and that the rapid pace of technological progress could lead to policy changes. But ultimately, "in the most important ways, the 2030s may not be wildly different," Altman said, adding "people will still love their families, express their creativity, play games, and swim in lakes."
[6]
Sam Altman Says Humans are Already Past "the A.I. Event Horizon"
Sam Altman explains how A.I. will revolutionize industries and challenge societal structures in the coming decades. OpenAI CEO Sam Altman believes we are already past "the A.I. event horizon," he said in a new blog post yesterday (June 11), arguing that A.I. development is quietly reshaping civilization -- even if the shift feels subtle. "The takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it's much less strange than it seems it should be," he wrote. According to the OpenAI CEO, 2025 marks a pivotal shift in A.I. capabilities, particularly in coding and complex reasoning. By next year, he expects A.I. systems to begin generating original scientific ideas, with autonomous robots functioning effectively in the physical world by 2027. "In the 2030s, intelligence and energy are going to become wildly abundant. These two have long been the fundamental limiters on human progress," he wrote. "With abundant intelligence and energy (and good governance), we can theoretically have anything else." One key driver of this shift is A.I. infrastructure, such as computing power, servers and data center storage. As it becomes more automated and easier to deploy, the cost of intelligence could soon be as low as electricity. And it will supercharge scientific discovery, enable infrastructure to build itself, and unlock new frontiers in health care, materials science and space exploration. "If we can do a decade's worth of research in a year, or a month, then the rate of progress will obviously be quite different," Altman wrote. 
Altman also addressed a common question: how much energy does a ChatGPT query use? He revealed that a typical query consumes just 0.34 watt-hours of energy and 0.000085 gallons of water -- roughly the energy an oven uses in a second and as little water as one-fifteenth of a teaspoon. While some fear that A.I. could render human labor obsolete, Altman believes that by 2030, A.I. will amplify human creativity and productivity, not replace it. "In some big sense, ChatGPT is already more powerful than any human who has ever lived. A small new capability can create a hugely positive impact," he wrote. However, Altman also acknowledged the dangers. He noted that alignment -- the challenge of ensuring A.I. systems understand and follow long-term human values -- is still unsolved. He cited social media algorithms as an example of poorly aligned A.I. systems -- tools optimized for engagement that often result in harmful societal outcomes. The real threat is not that A.I. will replace human purpose, but that society might fail to evolve the systems and policies necessary for people to thrive alongside increasingly intelligent machines. He urged global leaders to begin a serious conversation about the values and boundaries that should guide A.I. development before the technology becomes too deeply entrenched to redirect. "The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better," he wrote.
[7]
Sam Altman Reveals ChatGPT's Energy Bill and the Road to Superintelligence
OpenAI CEO Sam Altman recently penned a blog post titled 'The Gentle Singularity' and revealed how much energy ChatGPT uses for each query. Altman wrote that, on average, a ChatGPT query uses about 0.34 watt-hours, closer to what "an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes." Altman further noted that a ChatGPT query also uses "about 0.000085 gallons of water; roughly one fifteenth of a teaspoon." Altman went on to say that "the cost of intelligence should eventually converge to near the cost of electricity." But besides ChatGPT's energy consumption, what caught my attention was Altman's opening paragraph. He declares, in rather dramatic fashion, that we are accelerating towards superintelligence: "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be." It appears Altman is apprising the public that we have crossed the threshold and are moving towards transformative AI. The next paragraph gets even more interesting, where he writes, "The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far." This suggests that current AI technologies are sufficient to take us to AGI (Artificial General Intelligence) and eventually, ASI (Artificial Superintelligence). This proposition is in direct conflict with prominent AI skeptics, including Meta AI's chief scientist, Yann LeCun, who say LLMs have hit a wall and are not capable of leading us to AGI or ASI. Altman further lays out a timeline of what's coming. He also touched on "recursive self-improvement," the idea of an AI system that can autonomously improve itself. He writes that current AI systems are not completely autonomous, but "this is a larval version of recursive self-improvement." 
Current AI systems are beginning to improve the process of building better systems, which suggests that future AI may help build more advanced AI. OpenAI is already hearing from scientists that they have become two or three times more productive with the help of current AI systems. Having said that, Altman acknowledges that job displacement will lead to serious societal disruption. He writes, "There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before." In this regard, OpenAI's former chief scientist and SSI chief, Ilya Sutskever, said the following at the University of Toronto: "Slowly but surely, or maybe not so slowly, AI will keep getting better. And the day will come when AI will do all of our... all the things that we can do, not just some of them, but all of them. Anything which I can learn, anything which any one of you can learn, the AI could do as well. How do we know this, by the way? How can I be so sure? How can I be so sure of that? The reason is that all of us have a brain, and the brain is a biological computer. That's why we have a brain. The brain is a biological computer. So, why can't the digital computer, a digital brain, do the same things? This is the one sentence summary for why AI will be able to do all those things: because we have a brain and the brain is a biological computer. And so you can start asking yourselves, what's going to happen? What's going to happen when computers can do all of our jobs? Right? Those are really big questions." From AI researchers to industry leaders, many are claiming that AI is a transformative technology and will lead to a world of abundance. However, before that future arrives, the world will likely see societal disruptions. How much of this bold vision will become reality remains to be seen.
Sam Altman, CEO of OpenAI, claims that humanity is on the brink of developing superintelligent AI, potentially reshaping society and the global economy in the coming decades.
OpenAI CEO Sam Altman has made a bold prediction about the future of artificial intelligence, claiming that humanity is on the cusp of developing superintelligent AI. In a recent blog post, Altman stated, "We are past the event horizon; the takeoff has started," suggesting that we have entered a new era of digital superintelligence [1].
Altman describes this transition as a "gentle singularity," a gradual and manageable progression towards powerful digital superintelligence. He argues that the process is already underway, citing the rapid adoption of AI tools like ChatGPT, which reportedly has 800 million weekly active users [4].
The OpenAI CEO outlines an ambitious timeline for AI development: agents doing real cognitive work in 2025, systems that can produce novel insights in 2026, and robots that can carry out tasks in the real world in 2027.
Altman believes that by 2035, we may progress from solving high-energy physics to beginning space colonization within a single year [1].
The rapid advancement of AI technology is expected to have far-reaching consequences for society and the global economy. Altman acknowledges that "whole classes of jobs" may disappear but remains optimistic about humanity's ability to adapt [5].
He suggests that the increasing wealth generated by AI advancements could lead to new policy ideas, potentially including universal basic income [1].
Altman's vision for AI's future has raised concerns about energy consumption. In a statement that has alarmed some observers, he suggested that "a significant fraction of the power on Earth should be spent running AI compute" [3].
This comment has sparked debates about the environmental impact of AI development and the sustainability of such rapid technological growth.
Despite Altman's enthusiasm, his predictions have been met with skepticism from some quarters. Critics argue that current AI models still struggle with reasoning and often produce hallucinations or incorrect information [2].
Additionally, concerns have been raised about the lack of transparency regarding the energy and water usage of AI models like ChatGPT [2].
Altman's predictions come amid an intensifying competition in the AI industry. Major tech companies like Microsoft, Google, Apple, and Meta are investing billions of dollars in AI development and are rapidly integrating AI capabilities into their products and services [5].
As the AI landscape becomes increasingly crowded, companies are vying for users and market share, driving further innovation and investment in the field.
While Altman's vision of an AI-driven future is both exciting and concerning, it remains to be seen how accurately his predictions will play out. As AI technology continues to advance at a rapid pace, it is clear that society will need to grapple with the profound implications of these developments in the coming years.
Summarized by Navi