3 Sources
[1]
OpenAI says it's working toward catastrophe or utopia - just not sure which
Experts warn against rapidly developing superintelligence. OpenAI is warning about the dangers of runaway AI systems, even while it competes with other major tech developers to build "superintelligence" -- an as-yet theoretical machine intelligence that outperforms the capabilities of the human brain. In a blog post titled "AI Progress and Recommendations," published Thursday, the company outlined its vision for the broad-scale social benefit that such an advanced AI could confer upon humanity, the risks that could be encountered along the way, and some suggestions for mitigating them. Here's what it all means.

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

According to OpenAI, the development of superintelligent AI could democratize human well-being. "We expect the future to provide new and hopefully better ways to live a fulfilling life, and for more people to experience such a life than do today," the company wrote, before going on to say that the world would likely need to crack a few eggs in order to make an AI omelette. "It is true that work will be different, the economic transition may be very difficult in some ways, and it is even possible that the fundamental socioeconomic contract will have to change. But in a world of widely-distributed abundance, people's lives can be much better than they are today."

The message echoed a recent personal blog post from the company's CEO, Sam Altman, which portrayed superintelligent AI as an inevitability that would admittedly cause some major social disruptions (eliminating some categories of jobs, for example), but that would nevertheless turn out in the long run to be a major historical boon for humanity.

Thursday's blog made some vague predictions about how AI could lead to such "widely-distributed abundance" -- helping to speed up novel scientific discovery, for example (an effort that the company has already begun working on). The company added that "AI systems will help people understand their health, accelerate progress in fields like materials science, drug development, and climate modeling, and expand access to personalized education for students around the world."

"Superintelligence" is the latest and greatest marketing buzzword in Silicon Valley, for better or worse. Most famously, Meta announced in June that it had launched its own internal R&D division devoted to building superintelligence. Microsoft, similarly, has formed a "Superintelligence Team," which, according to a recent X post from the company's AI lead Mustafa Suleyman describing its "Humanist" approach, is geared toward building "incredibly advanced AI capabilities that always work for, in service of, people and humanity." Ironically, the term "superintelligence" was popularized by a 2014 book of the same name that largely served as a warning about the dangers of runaway, self-improving AI.

Last month, a statement published by the nonprofit Future of Life Institute and signed by tech luminaries including Geoffrey Hinton and Steve Wozniak warned that superintelligent AI could escape human control and pose an existential threat to civilization.
The statement advised that all industry efforts to build superintelligent AI should therefore be paused until labs can chart a safe pathway forward. Even with today's AI tools, many experts worry about what's known in the field as the alignment problem: the challenge of ensuring that these black-box systems don't contradict human interests. Broadly speaking, the major fear around superintelligence is that it would be so much more advanced than human intelligence, and so much more inscrutable than the systems we're interacting with today, that it could manipulate or mislead us in dangerously subtle ways. Others dismiss these fears as AI "doomerism," and insist that even if superintelligent AI were to somehow go off the rails, we could always just turn it off. Boom, problem solved.

OpenAI seemed to be nodding toward the Future of Life Institute statement about the risks of superintelligent AI when it wrote in its Thursday blog post that the technology was "potentially catastrophic," and that one potential solution was for the industry to "slow development to more carefully study these systems as we get closer to systems capable of recursive self-improvement."

It also made the case that the industry should be allowed to collaborate closely with federal lawmakers to create the AI equivalent of building codes and fire standards, thus ensuring general and standardized compliance with AI safety and oversight protocols. Of course, this could be another tactic from OpenAI to boost its own influence over the shaping of federal AI policy at a time when states like California and Colorado have started regulating the technology more explicitly. Suggesting the industry "slow development" just weeks after confirming its company restructuring and agreement with Microsoft around the goal of achieving AGI -- a stop along the way to, or often conflated with, superintelligence -- appears somewhat contradictory.

The company made its thoughts on a state-by-state approach to AI regulation clear in the blog: "Most developers and open-source models, and almost all deployments of today's technology, should have minimal additional regulatory burdens relative to what already exists," the company wrote. "It certainly should not have to face a 50-state patchwork." In the past, OpenAI has not necessarily endorsed regulation the way Anthropic has, but it has expressed a preference for federal regulation over state-by-state legislation.

Despite its insistence on the importance of safety and comprehensive governance frameworks, OpenAI has garnered a reputation within the AI industry as a speedy and often reckless company. Many of its early employees -- including siblings Dario and Daniela Amodei, the founders of rival AI lab Anthropic -- parted ways with the company and publicly criticized what they regarded as a culture that prioritized rapid development over safety. This issue was also at the center of the board's short-lived ousting of Altman, which backfired spectacularly.

OpenAI and its competitors have another reason to paint the future of AI so optimistically: to keep investor dollars flowing amid fears of an "AI bubble."
For the past three years, tech companies have spent tens of billions on AI, justified by the idea that the technology will soon assist with major scientific breakthroughs and revolutionize productivity, to name a few hopes. But many of those companies (including OpenAI) still aren't profitable; most businesses using AI haven't achieved any material gains from the technology; and the promise of AI-assisted scientific discovery remains mostly hypothetical.

That said, OpenAI recently announced it had reached a million business customers, many of whom reported bigger AI-driven profits than the industry at large has seen thus far -- indicating a possible turning point in the ROI narrative.
[2]
Superintelligence? No, but AI will still mean big changes
Experts may be skeptical about corporate AI hype to varying degrees, but they share the view that machine learning models will have a significant effect on society. The Forecasting Research Institute (FRI) has published a report [PDF] titled "The Longitudinal Expert AI Panel" that attempts to distill the forecasts of knowledgeable folk - mainly men - in industry, academia, and policy about the capabilities, adoption, and impact of AI in the years ahead.

The research project, led by Ezra Karger, an economist with the Federal Reserve Bank of Chicago, suggests that few AI experts believe "superintelligence" as outlined by the likes of Anthropic CEO Dario Amodei will arrive anytime soon. But the 339 respondents participating in the project - AI and ML scientists, economists, technical staff at frontier AI companies, and policy experts from NGOs - believe that AI will spur significant social changes by 2040.

"The median expert foresees that by 2030 AI will be responsible for 7 percent of US electricity usage, assist in 18 percent of work hours in the US, and provide daily companionship for 15 percent of adults - roughly 7x, 4x, and 2.5x current levels, respectively," the report says. A year ago, between 1 and 5 percent of all US work hours were assisted by generative AI, according to an economic paper, "The Rapid Adoption of Generative AI."

Panelists also predict that about 20 percent of ride-hailing trips will involve autonomous vehicles by the start of the next decade; among the general public, the expectation is that only 12 percent of ride-hailing trips will involve robocars by 2030.

Those surveyed see no end to the spending: annual global private investment in AI is projected to reach $260 billion by 2030, up from $130 billion in 2024. The experts appear to be none too concerned about the popping of what looks a lot like an AI investment bubble. Whether any of the frontier model leaders like Anthropic and OpenAI will have found a way by 2030 to profit from more expansive adoption of AI isn't addressed. Companies selling the picks and shovels for the AI gold rush - AWS, Google, Microsoft, and Nvidia - can at least expect avid usage of their cloud infrastructure. And these cloud hyperscalers may just end up investing more and more into Anthropic and OpenAI until they own them entirely.

Opinions diverged more substantially on predictions about AI usage in drug discovery. The top quartile of experts estimated that the majority of revenue from newly approved US drugs by 2040 will be attributable to AI discoveries; the bottom quartile expects less than 10 percent to come from AI. There's a similar divide over whether AI will independently solve, or assist in solving, a Millennium Prize Problem by 2040. About a quarter of the experts surveyed expect that AI will be up to the math challenge (a greater than 81 percent chance, they predict), while another quarter believe that AI doesn't have the right stuff (a less than 30 percent chance, they predict).

The median expert also sees AI advancing more slowly than the makers of frontier AI models predict; those companies foresee human- or superhuman-level intelligence arriving in the 2026-2029 period. The average expert gives the rosy vendor view about a 23 percent chance of being realized, while putting the chance of AI progress stalling around current levels at 28 percent.

It's perhaps worth noting that 78 percent of respondents identified as male, 15 percent are affiliated with effective altruism [PDF], and 18 percent are affiliated with top AI labs.
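To put those adoption multiples and the investment figure in perspective, here is a rough back-of-envelope sketch. It is not part of the FRI report: the 2030 levels, the multipliers, and the investment figures come from the article, while the implied current baselines and growth rate are simple illustrative arithmetic.

```python
# Back-of-envelope check on the FRI panel's 2030 medians quoted above.
# The 2030 forecasts and the "7x / 4x / 2.5x" multipliers are from the article;
# the implied current baselines and growth rate are derived here, not report figures.

forecasts_2030 = {
    "US electricity usage from AI (%)": (7.0, 7.0),    # (2030 level, multiple of today)
    "US work hours assisted by AI (%)": (18.0, 4.0),
    "US adults with daily AI companionship (%)": (15.0, 2.5),
}

for name, (level_2030, multiple) in forecasts_2030.items():
    implied_today = level_2030 / multiple
    print(f"{name}: ~{implied_today:.1f}% today -> {level_2030:.0f}% by 2030")

# Investment: $130B (2024) doubling to $260B (2030) implies the annual growth rate below.
invest_2024, invest_2030, years = 130e9, 260e9, 6
cagr = (invest_2030 / invest_2024) ** (1 / years) - 1
print(f"Implied annual growth in AI investment: {cagr:.1%}")
```

The implied baseline of roughly 4.5 percent of work hours assisted by AI sits within the 1 to 5 percent range the article cites from "The Rapid Adoption of Generative AI," and the doubling of investment over six years works out to an annual growth rate of about 12 percent.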
The survey does not address any of the ethical issues related to the training, deployment, and commercialization of AI technology.

Median panelists say there's only about a 20 to 25 percent chance that the AI train will be slowed by lack of AI literacy, societal unease, lack of use cases, and costs. Data quality, regulations, and cultural resistance are seen as more likely barriers to adoption (30 to 35 percent). Integration challenges and unreliability are expected to be the most significant obstacles (40 percent).

The various experts also had differing views about AI's impact on employment. The median expert forecast 2 percent growth in white-collar jobs between January 2025 and December 2030 - well below the 6.8 percent growth that the historical trend would predict. More than 75 percent of the experts predict slower white-collar job growth than current trends, and 25 percent anticipate a 4 percent loss of white-collar jobs by 2030.

Gustavo de Souza, an economist with the Federal Reserve Bank of Chicago, on Monday presented research showing that in Brazil, "AI significantly increased employment in production-related occupations, such as manufacturing, maintenance, and agriculture, while it reduced employment in administrative jobs," as he explained in a summary post. AI, de Souza found, allows less skilled workers to perform tasks that previously required more experience. While office workers can expect some of their tasks to be automated away, he asserts that the overall impact is a net improvement in wages and greater wage equality. "This shift reduces the barriers to entry in high-AI-exposed occupations, increases the hiring of lower-skilled workers, and erodes the wage premium for high-skilled individuals," he wrote.

Those whose labor can be replaced by AI will likely be hurt, while those whose labor is complemented by AI will likely benefit. Overall, he argues, AI has had a positive impact in Brazil, and he expects the effect could be even broader in the US. That said, he also calls for policies and programs to help workers transition from occupations made redundant by AI. ®
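As a simple illustration of how far the median forecast sits below trend, the sketch below applies the growth rates reported above to a hypothetical index of 100 white-collar jobs. Only the percentages come from the survey; the baseline and the calculation are illustrative, not from the report.

```python
# Illustrative comparison of the white-collar job forecasts reported above.
# Only the growth rates are from the survey; the 100-job baseline is hypothetical.

baseline_jobs = 100.0  # hypothetical index of white-collar jobs in January 2025

scenarios = {
    "historical trend (+6.8%)": 0.068,
    "median expert forecast (+2%)": 0.02,
    "pessimistic quartile (-4%)": -0.04,
}

for label, growth in scenarios.items():
    jobs_2030 = baseline_jobs * (1 + growth)
    print(f"{label}: {jobs_2030:.1f} jobs per 100 by December 2030")

# Gap between the historical trend and the median forecast, in jobs per 100:
gap = baseline_jobs * (0.068 - 0.02)
print(f"Shortfall vs. trend: about {gap:.1f} jobs per 100")
```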
[3]
Big Tech Says Superintelligent AI Is in Sight. The Average Expert Disagrees
The top leaders in AI have very lofty expectations for the future of artificial intelligence. "By 2026 or 2027, we will have AI systems that are broadly better than almost all humans at almost all things," Anthropic CEO Dario Amodei said earlier this year at the World Economic Forum in Davos, Switzerland. Elon Musk posited late last year that we will have AI systems more intelligent than any single human by the end of this year, and that we will "100%" have an AI system that exceeds the intelligence of all human beings combined by 2030. OpenAI CEO Sam Altman told Bloomberg earlier this year that he thinks artificial general intelligence (AGI) will "probably get developed" before the end of Trump's presidential term.

Superintelligence, or AGI, is a super-powered future AI system that could theoretically outperform human intelligence on all fronts, and it has become the North Star of the tech industry. Meta, for example, has a whole division and a multibillion-dollar spending spree dedicated to building superintelligence, and CEO Mark Zuckerberg claims it is "in sight."

Although most of the AI leaders with the loudest microphones claim that artificial superintelligence is imminent, a new study paints a different picture of expert sentiment. According to the Forecasting Research Institute's "The Longitudinal Expert AI Panel" research report, the timeline for superintelligent AI will be much slower than what's been promised. Led by multi-disciplinary experts like Federal Reserve Bank of Chicago economist Ezra Karger, the fact-finders say they reached out to computer scientists, economists, industry professionals, and AI researchers "whom policymakers, business and nonprofit leaders, and other stakeholders would be most inclined to consult regarding the progression of AI capabilities and its technological impact."

The experts gave the tech leaders' timeline of rapid progress only a 23% chance of actually happening. The rapid progress described in the study is one in which "AI writes Pulitzer Prize-worthy novels, collapses years-long research into days and weeks, outcompetes any human software engineer, and independently develops new cures for cancer." So, pretty much mirroring how Silicon Valley describes artificial superintelligence.

"Radical change in major systems just takes longer than 4-5 years. I also think that [in] many of these domains, even unexpectedly fast advancement in AIs will not easily translate to improvements for quite some time because of unexpected barriers," one expert respondent wrote. "To paraphrase an old saying, every job looks easy for those not actually doing it."

Another expert thought that bottlenecks would intervene before AI's impact could scale up to the level predicted. "The force of its impact will likely be slowed by bottlenecks in areas AI hasn't yet conquered," the expert wrote. "There likely exist thousands (millions?) of potential bottlenecks in the economy which will only become legible as other processes are sped up by orders of magnitude."

Some of the advances in AI model capabilities have also been brought into question recently, after a new Oxford study found that many of the popular benchmarking tools used to test the performance of AI models were unreliable or misleading.

Not all tech leaders have the same unwavering belief in the future of superintelligence, though. Microsoft's AI chief, Mustafa Suleyman, is famously a non-believer, going so far as to call the pursuit of superintelligence "absurd."
Another tech titan, Salesforce CEO Marc Benioff, recently called the hype around artificial general intelligence an example of "hypnosis."

Meanwhile, some tech experts who do believe superintelligence is imminent are not particularly thrilled about that possibility. In October, a statement calling for a prohibition on the development of superintelligence until certain conditions are met was signed by more than 100,000 people, including Apple co-founder Steve Wozniak and computer scientists Geoffrey Hinton and Yoshua Bengio, both of whom are considered "godfathers of AI."

Even though most experts in the study disagreed that AI will evolve at light speed to reach superintelligence by the end of the decade, the average expert still believes in its transformational power. According to the study, experts predict that AI will have a significant impact by 2040, labeling it the "technology of the century," akin to electricity. And by 2030, experts believe that AI will provide daily companionship for roughly 15% of adults and assist in 18% of work hours in the U.S.

Some of the experts included in the study are described by the researchers as "superforecasters." But AI is a tricky thing to forecast, even for so-called superforecasters, as it turns out. A past study of AI experts and superforecasters, also conducted by the Forecasting Research Institute, found that both groups had underestimated just how fast AI could progress. For example, in 2022 experts thought that AI would win a gold medal at the International Mathematical Olympiad in 2030, and superforecasters said 2035. But a Google-built AI system won that gold in July of this year.
A major study reveals that AI experts are far more skeptical about achieving superintelligence by 2030 than Big Tech leaders claim, while OpenAI warns of potential catastrophic risks in AI development.
A stark divide has emerged between Silicon Valley's most prominent AI leaders and the broader expert community regarding the timeline for achieving superintelligent AI. While tech executives like Anthropic's Dario Amodei predict AI systems will be "broadly better than almost all humans at almost all things" by 2026 or 2027, and OpenAI's Sam Altman expects artificial general intelligence before the end of Trump's presidential term, a comprehensive new study suggests these timelines are overly optimistic. [1][3]
The Forecasting Research Institute's "Longitudinal Expert AI Panel" surveyed 339 specialists, including AI and machine learning scientists, economists, technical staff at frontier AI companies, and policy experts. Their findings reveal that experts give the tech leaders' rapid-progress timeline only a 23% chance of actually happening. [2][3]

In a striking contradiction, OpenAI itself has begun acknowledging the risks associated with the very technology it's racing to develop. In a recent blog post titled "AI Progress and Recommendations," the company warned that superintelligent AI could be "potentially catastrophic" while simultaneously outlining its vision for the broad-scale social benefits such technology could provide. [1]
The company suggested that the industry might need to "slow development to more carefully study these systems as we get closer to systems capable of recursive self-improvement." This recommendation echoes concerns raised by the Future of Life Institute, whose statement, signed by tech luminaries including Geoffrey Hinton and Steve Wozniak, warned that superintelligent AI could escape human control and pose an existential threat to civilization. [1]

Despite skepticism about superintelligence timelines, experts still anticipate significant AI adoption and impact by 2030. The median expert forecasts that AI will be responsible for 7% of US electricity usage, assist in 18% of work hours in the US, and provide daily companionship for 15% of adults by 2030 -- roughly 7x, 4x, and 2.5x current levels, respectively. [2]

Global private investment in AI is projected to reach $260 billion annually by 2030, up from $130 billion in 2024. Experts also predict that about 20% of ride-hailing trips will involve autonomous vehicles by the start of the next decade, compared with the general public's expectation of only 12%. [2]

The study reveals mixed predictions about AI's impact on employment. While the median expert forecasts 2% growth in white-collar jobs between January 2025 and December 2030, this represents a significant slowdown from the historical trend of 6.8% white-collar job growth. More than 75% of experts predict slower white-collar job growth than current trends, with 25% anticipating a 4% white-collar job loss by 2030. [2]
Research from Brazil provides additional context, showing that AI "significantly increased employment in production-related occupations, such as manufacturing, maintenance, and agriculture, while it reduced employment in administrative jobs." The technology appears to allow less skilled workers to perform tasks that previously required more experience. [2]

Experts identify several potential obstacles to widespread AI adoption. Integration challenges and system unreliability are expected to be the most significant barriers, with a 40% likelihood of impeding progress. Data quality issues, regulations, and cultural resistance are seen as moderately likely obstacles (30-35% chance), while factors like lack of AI literacy, societal unease, and costs are viewed as less likely to slow adoption (20-25% chance). [2]

Summarized by Navi