2 Sources
[1]
OpenAI's Altman, Ethereum's Buterin Outline Competing Visions for AI's Future - Decrypt
This week, two of tech's most influential voices offered contrasting visions of artificial intelligence development, highlighting the growing tension between innovation and safety. OpenAI CEO Sam Altman revealed Sunday evening, in a blog post about his company's trajectory, that OpenAI has tripled its user base to over 300 million weekly active users as it races toward artificial general intelligence (AGI). "We are now confident we know how to build AGI as we have traditionally understood it," Altman said, claiming that in 2025, AI agents could "join the workforce" and "materially change the output of companies." Altman says OpenAI is headed toward more than just AI agents and AGI: the company is beginning to work on "superintelligence in the true sense of the word." A timeframe for the delivery of AGI or superintelligence is unclear. OpenAI did not immediately respond to a request for comment.

But hours earlier on Sunday, Ethereum co-creator Vitalik Buterin proposed using blockchain technology to create global failsafe mechanisms for advanced AI systems, including a "soft pause" capability that could temporarily restrict industrial-scale AI operations if warning signs emerge. Buterin calls this approach "d/acc," or decentralized/defensive acceleration. In the simplest sense, d/acc is a variation on e/acc, or effective accelerationism, a philosophical movement espoused by high-profile Silicon Valley figures such as a16z's Marc Andreessen. Buterin's d/acc also supports technological progress but prioritizes developments that enhance safety and human agency. Unlike e/acc, which takes a "growth at any cost" approach, d/acc focuses on building defensive capabilities first. "D/acc is an extension of the underlying values of crypto (decentralization, censorship resistance, open global economy, and society) to other areas of technology," Buterin wrote.

Looking back at how d/acc has progressed over the past year, Buterin wrote about how a more cautious approach toward AGI and superintelligent systems could be implemented using existing crypto mechanisms such as zero-knowledge proofs. Under Buterin's proposal, major AI computers would need weekly approval from three international groups to keep running. "The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices," Buterin explained. The system would work like a master switch in which either all approved computers run, or none do, preventing selective enforcement. "Until such a critical moment happens, merely having the capability to soft-pause would cause little harm to developers," Buterin noted, describing the system as a form of insurance against catastrophic scenarios.

In any case, OpenAI's explosive growth, from 100 million to 300 million weekly users in just two years, shows how rapidly AI adoption is progressing. Reflecting on OpenAI's evolution from an independent research lab into a major tech company, Altman acknowledged the challenges of building "an entire company, almost from scratch, around this new technology." The proposals reflect broader industry debates around managing AI development. Proponents have previously argued that implementing any global control system would require unprecedented cooperation between major AI developers, governments, and the crypto sector.
[2]
Vitalik Buterin floats 'soft pause' on compute in case of sudden risky AI
Ethereum co-founder Vitalik Buterin says a temporary "pause" on worldwide available compute could be a way to "buy more time for humanity" in case of a possibly harmful form of AI superintelligence. In a Jan. 5 blog post following up his November 2023 post advocating the idea of "defensive accelerationism," or d/acc, Buterin said superintelligent AI could be as little as five years away, and there's no telling whether the outcome would be positive.

Buterin says a "soft pause" on industrial-scale computer hardware could be an option to slow AI development if this happens, reducing globally available compute power by up to 99% for 1 to 2 years "to buy more time for humanity to prepare." A superintelligence is a theoretical AI model typically defined as being far more intelligent than the smartest humans in all fields of expertise. Many tech executives and researchers have aired concerns about AI, with over 2,600 signing a March 2023 open letter urging a halt to AI development due to "profound risks to society and humanity."

Buterin noted that his post introducing d/acc made only "vague appeals to not build risky forms of superintelligence" and that he wanted to share his thoughts on how to address the scenario "where AI risk is high." However, Buterin said he'd only push for a hardware soft pause if he were "convinced that we need something more 'muscular' than liability rules," under which those who use, deploy, or develop AI could be sued for damages caused by the model.

He noted that existing proposals for a hardware pause include locating AI chips and requiring their registration, but proposed instead that industrial-scale AI hardware could be fitted with a chip that only allows it to continue running if it receives a trio of signatures once a week from major international bodies. "The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing," Buterin wrote. "There would be no practical way to authorize one device to keep running without authorizing all other devices."

The d/acc idea supported by Buterin advocates a careful approach to developing technology, in contrast to effective accelerationism, or e/acc, which pushes for unrestricted, unbridled technological change.
Sam Altman of OpenAI and Vitalik Buterin of Ethereum offer divergent perspectives on AI development, highlighting the tension between rapid innovation and safety measures in the pursuit of artificial general intelligence (AGI) and superintelligence.
OpenAI, under the leadership of CEO Sam Altman, has experienced explosive growth, tripling its user base to over 300 million weekly active users in just two years [1]. This surge in adoption underscores the accelerating pace of AI development and its increasing integration into various sectors.
Altman's recent blog post revealed OpenAI's confidence in its ability to build AGI as traditionally understood. He projected that in 2025, AI agents could "join the workforce" and "materially change the output of companies" [1]. Furthermore, Altman disclosed that OpenAI is setting its sights beyond AGI, venturing into the realm of "superintelligence in the true sense of the word" [1].
In contrast to OpenAI's ambitious trajectory, Ethereum co-founder Vitalik Buterin proposed a more measured approach to AI development. Buterin introduced the concept of "decentralized/defensive acceleration" (d/acc), which advocates for technological progress while prioritizing safety and human agency [1].
Buterin suggested leveraging blockchain technology to create global failsafe mechanisms for advanced AI systems. One key proposal is a "soft pause" capability that could temporarily restrict industrial-scale AI operations if warning signs emerge [2]. This approach aims to provide a form of insurance against potentially catastrophic scenarios.
Buterin's proposed soft pause mechanism involves fitting industrial-scale AI hardware with chips that require weekly approval from three international bodies to continue operating [2]. This system would function as an all-or-nothing master switch, preventing selective enforcement and ensuring global compliance.
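The core of this check is simple enough to sketch. The following is a minimal, hypothetical illustration of the all-or-nothing logic, not any published specification: it assumes Ed25519 signatures via the PyNaCl library and a signed message consisting only of the current ISO week number, which is what would make the signatures device-independent. Every name in it (week_message, may_keep_running, the generated keys) is invented for illustration, and the zero-knowledge-proof variant Buterin mentions is omitted.

```python
# Hypothetical sketch of Buterin's weekly three-signature "soft pause" check.
# Assumptions (ours, not the source's): Ed25519 signatures via PyNaCl, and a
# signed message that is only the ISO week number, so the same three
# signatures authorize every device at once.
import datetime

from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey, VerifyKey


def week_message(today: datetime.date) -> bytes:
    """Device-independent message: the current ISO year and week number."""
    year, week, _ = today.isocalendar()
    return f"{year}-W{week:02d}".encode()


def may_keep_running(signer_keys: list[VerifyKey],
                     signatures: list[bytes],
                     today: datetime.date) -> bool:
    """All-or-nothing check: run only if all three bodies signed this week.

    Because the message carries no device ID, there is no way to authorize
    one device without authorizing all of them.
    """
    msg = week_message(today)
    for key, sig in zip(signer_keys, signatures):
        try:
            key.verify(msg, sig)  # raises BadSignatureError if invalid
        except BadSignatureError:
            return False  # any missing or bad signature halts the hardware
    return len(signatures) == len(signer_keys) == 3


if __name__ == "__main__":
    # Stand-ins for the three international bodies publishing weekly signatures.
    bodies = [SigningKey.generate() for _ in range(3)]
    today = datetime.date.today()
    sigs = [b.sign(week_message(today)).signature for b in bodies]
    keys = [b.verify_key for b in bodies]
    print(may_keep_running(keys, sigs, today))      # True: all three signed
    print(may_keep_running(keys, sigs[:2], today))  # False: one signature missing
```

Under a scheme like this, the three bodies would each publish one signature per week, and every chip would verify the same triple, which is exactly the "master switch" behavior described above.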
The Ethereum co-founder emphasized that such a mechanism could reduce globally available compute power by up to 99% for 1 to 2 years, potentially "buying more time for humanity to prepare" in the face of rapidly advancing AI capabilities [2].
The divergent approaches of Altman and Buterin reflect a broader industry debate on managing AI development. Altman's stance aligns more closely with "effective accelerationism" (e/acc), which advocates for unrestricted technological advancement. In contrast, Buterin's d/acc philosophy supports a more cautious approach, focusing on building defensive capabilities first [1].
The contrasting visions presented by these influential tech figures highlight the growing tension between innovation and safety in AI development. While OpenAI's rapid growth demonstrates the increasing adoption and potential of AI technologies, Buterin's proposals reflect concerns shared by many in the tech community.
Over 2,600 tech executives and researchers have previously urged a halt to AI development, citing "profound risks to society and humanity" [2]. The debate surrounding AI safety and regulation is likely to intensify as the technology continues to advance at an unprecedented pace.
As the race towards AGI and superintelligence accelerates, the tech industry faces critical decisions about balancing progress with responsible development practices. The coming years will likely see increased discussion and potential implementation of safety measures and regulatory frameworks to address the challenges posed by rapidly evolving AI technologies.