2 Sources
[1]
'We are past the event horizon': Sam Altman thinks superintelligence is within our grasp and makes 3 bold predictions for the future of AI and robotics
In a long blog post, OpenAI CEO Sam Altman has set out his vision of the future, arguing that artificial general intelligence (AGI) is now inevitable and about to change the world. In what could be viewed as an attempt to explain why we haven't achieved AGI quite yet, Altman seems at pains to frame the progress of AI as a gentle curve rather than a rapid acceleration, but insists that we are now "past the event horizon" and that "when we look back in a few decades, the gradual changes will have amounted to something big."

"From a relativistic perspective, the singularity happens bit by bit", writes Altman, "and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it's one smooth curve."

But even with a more gradual timeline, Altman is confident that we're on our way to AGI, and predicts three ways it will shape the future: "2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world."

Of particular interest to Altman is the role that robotics is going to play in that future. To do real tasks in the world, as Altman imagines, the robots would need to be humanoid, since our world is, after all, designed to be used by humans. Altman says "...robots that can build other robots ... aren't that far off. If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain - digging and refining minerals, driving trucks, running factories, etc - to build more robots, which can build more chip fabrication facilities, data centers, etc, then the rate of progress will obviously be quite different."

Altman says society will have to change to adapt to AI, on the one hand through job losses, but also through increased opportunities: "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before."

Altman balances the changing job landscape against the new opportunities that superintelligence will bring: "...maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year."

In Altman's bold new future, superintelligence will be cheap and widely available. When describing the best path forward, Altman first suggests we solve the "alignment problem", which involves getting "...AI systems to learn and act towards what we collectively really want over the long-term". "Then [we need to] focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country ... Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better."

Reading Altman's blog, there's a kind of inevitability behind his prediction that humanity is marching uninterrupted towards AGI.
It's like he's seen the future, and there's no room for doubt in his vision. But is he right? Altman's vision stands in stark contrast to a recent paper from Apple suggesting we are a lot farther away from achieving AGI than many AI advocates would like.

"The Illusion of Thinking", a new research paper from Apple, states that "despite their sophisticated self-reflection mechanisms learned through reinforcement learning, these models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold." The research was conducted on Large Reasoning Models (LRMs), such as OpenAI's o1/o3 models and Claude 3.7 Sonnet Thinking. "Particularly concerning is the counterintuitive reduction in reasoning effort as problems approach critical complexity, suggesting an inherent compute scaling limit in LRMs," the paper says.

In contrast, Altman is convinced that "Intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030."

As with all predictions about the future, we'll find out if Altman is right soon enough.
[2]
Altman Expects 2030s to be 'Wildly Different' From Any Other Decade | AIM
"The ability for one person to get much more done in 2030 than they could in 2020 will be a striking change." OpenAI CEO Sam Altman believes the world is on the brink of a transformation driven by artificial intelligence and automation, marking 2025 to 2027 as milestone years. In a blog post titled The Gentle Singularity, Altman wrote, "2025 has seen the arrival of agents that can do real cognitive work. Writing computer code will never be the same." He added that 2026 may bring systems that can "figure out novel insights," and by 2027, robots could be performing tasks in the physical world. He predicted that more people would be able to create software and art using these tools, but said, "Experts will probably still be much better than novices, as long as they embrace the new tools." He pointed to a significant shift in productivity: "The ability for one person to get much more done in 2030 than they could in 2020 will be a striking change." Looking further ahead, Altman suggested that while certain aspects of life will remain the same, such as family, creativity, and leisure, others will not. "In still-very-important-ways, the 2030s are likely going to be wildly different from any time that has come before," he wrote. He emphasised the transformative potential of two forces: intelligence and energy. "These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else." The blog comes as OpenAI continues to push ahead with advancements in AI agents and cognitive systems, with major implications for software development, research, and industry. The company recently launched the o3-pro model to all ChatGPT Pro and Team users. The new model replaces the previous o1-pro and is now available through the model picker in ChatGPT. Enterprise and Edu users will receive access next week.
OpenAI CEO Sam Altman shares his vision for the future of AI, predicting the arrival of superintelligence and advanced robotics by 2030, while emphasizing the gradual nature of technological progress and its potential impact on society.
OpenAI CEO Sam Altman has shared his ambitious vision for the future of artificial intelligence (AI) in a recent blog post titled "The Gentle Singularity." Altman believes that we are on the cusp of a transformative era, with artificial general intelligence (AGI) becoming an inevitability that will reshape our world [1].
Source: TechRadar
Altman outlines a timeline for significant AI developments: agents capable of real cognitive work in 2025, systems that can figure out novel insights in 2026, and robots that can do tasks in the real world in 2027.
He emphasizes that while these changes may seem gradual, they will amount to substantial progress when viewed retrospectively.
Altman places particular emphasis on the future of robotics: he expects humanoid robots that can build other robots and eventually operate the entire supply chain, from digging and refining minerals to driving trucks and running factories.
The CEO acknowledges that society will need to adapt to these AI advancements: whole classes of jobs may go away, but he argues the world will be getting so much richer, so quickly, that we will be able to seriously entertain policy ideas that were never possible before.
Altman proposes key steps for the responsible development of AI: first solve the alignment problem, then make superintelligence cheap, widely available, and not too concentrated with any person, company, or country, within broad bounds that society decides on.
While Altman's vision is optimistic, it's worth noting that not all experts share his outlook: Apple's recent research paper "The Illusion of Thinking" found that reasoning models' performance collapses to zero beyond a certain complexity threshold, suggesting AGI may be farther away than its advocates claim.
Source: Analytics India Magazine
Altman predicts that the 2030s will be "wildly different from any time that has come before." He highlights intelligence and energy as the two fundamental limiters on human progress, arguing that with both in abundance (and good governance), "we can theoretically have anything else," and that one person will be able to get much more done in 2030 than they could in 2020.
As OpenAI continues to advance its AI technologies, including the recent launch of the o3-pro model, the tech world watches with anticipation to see if Altman's bold predictions will come to fruition.