[1]
How the Loudest Voices in AI Went From 'Regulate Us' to 'Unleash Us'
On May 16, 2023, Sam Altman appeared before a subcommittee of the Senate Judiciary Committee. The title of the hearing was "Oversight of AI." The session was a lovefest, with both Altman and the senators celebrating what Altman called AI's "printing press moment" -- and acknowledging that the US needed strong laws to avoid its pitfalls. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," he said. The legislators hung on Altman's every word as he gushed about how smart laws could allow AI to flourish -- but only within firm guidelines that both lawmakers and AI builders deemed vital at that moment. Altman was speaking for the industry, which widely shared his attitude. The battle cry was "Regulate Us!"

Two years later, on May 8 of this year, Altman was back in front of another group of senators. The senators and Altman were still singing the same tune, but one pulled from a different playlist. This hearing was called "Winning the AI Race." In DC, the word "oversight" has fallen out of favor, and the AI discourse is no exception. Instead of advocating for outside bodies to examine AI models to assess risks, or for platforms to alert people when they are interacting with AI, committee chair Ted Cruz argued for a path where the government would not only fuel innovation but remove barriers like "overregulation." Altman was on board with that. His message was no longer "regulate me" but "invest in me." He said that overregulation -- like the rules adopted by the European Union, or one bill recently vetoed in California -- would be "disastrous." "We need the space to innovate and to move quickly," he said. Safety guardrails might be necessary, he affirmed, but they needed to involve "sensible regulation that does not slow us down."

What happened? For one thing, the panicky moment just after everyone got freaked out by ChatGPT passed, and it became clear that Congress wasn't going to move quickly on AI. But the biggest development is that Donald Trump took back the White House and hit the brakes on the Biden administration's nuanced, pro-regulation tone. The Trump doctrine of AI regulation seems suspiciously close to that of Trump supporter Marc Andreessen, who declared in his Techno-Optimist Manifesto that AI regulation was literally a form of murder because "any deceleration of AI will cost lives." Vice President J.D. Vance made these priorities explicit at an international gathering held in Paris this February. "I'm not here ... to talk about AI safety, which was the title of the conference a couple of years ago," he said. "We believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off, and we'll make every effort to encourage pro-growth AI policies." The administration later unveiled an AI Action Plan "to enhance America's position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation."

Two foes have emerged in this movement. First is the European Union, which has adopted a regulatory regimen that demands transparency and accountability from major AI companies. The White House despises this approach, as do those building AI businesses in the US. But the biggest bogeyman is China. The prospect of the People's Republic besting the US in the "AI Race" is so unthinkable that regulation must be put aside, or done with what both Altman and Cruz described as a "light touch."
Some of this reasoning comes from a theory known as "hard takeoff," which posits that AI models can reach a tipping point where lightning-fast self-improvement launches a dizzying gyre of supercapability, also known as AGI. "If you get there first, you dastardly person, I will not be able to catch you," says former Google CEO Eric Schmidt, with the "you" being a competitor. (Schmidt had been speaking about China's status as a leader in open source.) Schmidt is one of the loudest voices warning about this possible future. But the White House is probably less interested in the Singularity than it is in classic economic competition.

The fear of China pulling ahead on AI is the key driver of current US policy, safety be damned. The party line even objects to individual states trying to fill the vacuum of inaction with laws of their own. The version of the tax-break-giving, Medicaid-cutting megabill just passed by the House included a mandated moratorium on any state-level AI legislation for 10 years. That's an eternity in terms of AI progress. (Pundits are saying that this provision won't survive opposition in the Senate, but it should be noted that almost every Republican in the House voted for it.)
[2]
Two Paths for A.I.
Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He'd become convinced that the company wasn't prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in "alignment," he told me -- the suite of techniques used to insure that A.I. acts in accordance with human commands and values -- were lagging behind gains in intelligence. Researchers, he said, were hurtling toward the creation of powerful systems they couldn't control. Kokotajlo, who had transitioned from a graduate program in philosophy to a career in A.I., explained how he'd educated himself so that he could understand the field. While at OpenAI, part of his job had been to track progress in A.I. so that he could construct timelines predicting when various thresholds of intelligence might be crossed. At one point, after the technology advanced unexpectedly, he'd had to shift his timelines up by decades. In 2021, he'd written a scenario about A.I. titled "What 2026 Looks Like." Much of what he'd predicted had come to pass before the titular year. He'd concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner. He sounded scared. Around the same time that Kokotajlo left OpenAI, two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference." In it, Kapoor and Narayanan, who study technology's integration with society, advanced views that were diametrically opposed to Kokotajlo's. They argued that many timelines of A.I.'s future were wildly optimistic; that claims about its usefulness were often exaggerated or outright fraudulent; and that, because of the world's inherent complexity, even powerful A.I. would change it only slowly. They cited many cases in which A.I. systems had been called upon to deliver important judgments -- about medical diagnoses, or hiring -- and had made rookie mistakes that indicated a fundamental disconnect from reality. The newest systems, they maintained, suffered from the same flaw. Recently, all three researchers have sharpened their views, releasing reports that take their analyses further. The nonprofit AI Futures Project, of which Kokotajlo is the executive director, has published "AI 2027," a heavily footnoted document, written by Kokotajlo and four other researchers, which works out a chilling scenario in which "superintelligent" A.I. systems either dominate or exterminate the human race by 2030. It's meant to be taken seriously, as a warning about what might really happen. Meanwhile, Kapoor and Narayanan, in a new paper titled "AI as Normal Technology," insist that practical obstacles of all kinds -- from regulations and professional standards to the simple difficulty of doing physical things in the real world -- will slow A.I.'s deployment and limit its transformational potential. While conceding that A.I. may eventually turn out to be a revolutionary technology, on the scale of electricity or the internet, they maintain that it will remain "normal" -- that is, controllable through familiar safety measures, such as fail-safes, kill switches, and human supervision -- for the foreseeable future. 
"AI is often analogized to nuclear weapons," they argue. But "the right analogy is nuclear power," which has remained mostly manageable and, if anything, may be underutilized for safety reasons. Which is it: business as usual or the end of the world? "The test of a first-rate intelligence," F. Scott Fitzgerald famously claimed, "is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function." Reading these reports back-to-back, I found myself losing that ability, and speaking to their authors in succession, in the course of a single afternoon, I became positively deranged. "AI 2027" and "AI as Normal Technology" aim to describe the same reality, and have been written by deeply knowledgeable experts, but arrive at absurdly divergent conclusions. Discussing the future of A.I. with Kapoor, Narayanan, and Kokotajlo, I felt like I was having a conversation about spirituality with Richard Dawkins and the Pope. In the parable of the blind men and the elephant, a group of well-intentioned people grapple with an unfamiliar object, failing to agree on its nature because each believes that the part he's encountered defines the whole. That's part of the problem with A.I. -- it's hard to see the whole of something new. But it's also true, as Kapoor and Narayanan write, that "today's AI safety discourse is characterized by deep differences in worldviews." If I were to sum up those differences, I'd say that, broadly speaking, West Coast, Silicon Valley thinkers are drawn to visions of rapid transformation, while East Coast academics recoil from them; that A.I. researchers believe in quick experimental progress, while other computer scientists yearn for theoretical rigor; and that people in the A.I. industry want to make history, while those outside of it are bored of tech hype. Meanwhile, there are barely articulated differences on political and human questions -- about what people want, how technology evolves, how societies change, how minds work, what "thinking" is, and so on -- that help push people into one camp or the other. An additional problem is simply that arguing about A.I. is unusually interesting. That interestingness, in itself, may be proving to be a trap. When "AI 2027" appeared, many industry insiders responded by accepting its basic premises while debating its timelines (why not "AI 2045"?). Of course, if a planet-killing asteroid is headed for Earth, you don't want NASA officials to argue about whether the impact will happen before or after lunch; you want them to launch a mission to change its path. At the same time, the kinds of assertions seen in "AI as Normal Technology" -- for instance, that it might be wise to keep humans in the loop during important tasks, instead of giving computers free rein -- have been perceived as so comparatively bland that they've long gone unuttered by analysts interested in the probability of doomsday. When a technology becomes important enough to shape the course of society, the discourse around it needs to change. Debates among specialists need to make room for a consensus upon which the rest of us can act. The lack of such a consensus about A.I. is starting to have real costs. When experts get together to make a unified recommendation, it's hard to ignore them; when they divide themselves into duelling groups, it becomes easier for decision-makers to dismiss both sides and do nothing. Currently, nothing appears to be the plan. A.I. 
companies aren't substantially altering the balance between capability and safety in their products; in the budget-reconciliation bill that just passed the House, a clause prohibits state governments from regulating "artificial intelligence models, artificial intelligence systems, or automated decision systems" for ten years. If "AI 2027" is right, and that bill is signed into law, then by the time we're allowed to regulate A.I. it might be regulating us. We need to make sense of the safety discourse now, before the game is over. Artificial intelligence is a technical subject, but describing its future involves a literary truth: the stories we tell have shapes, and those shapes influence their content. There are always trade-offs. If you aim for reliable, levelheaded conservatism, you risk downplaying unlikely possibilities; if you bring imagination to bear, you might dwell on what's interesting at the expense of what's likely. Predictions can create an illusion of predictability that's unwarranted in a fun-house world. In 2019, when I profiled the science-fiction novelist William Gibson, who is known for his prescience, he described a moment of panic: he'd thought he had a handle on the near future, he said, but "then I saw Trump coming down that escalator to announce his candidacy. All of my scenario modules went 'beep-beep-beep.' " We were veering down an unexpected path. "AI 2027" is imaginative, vivid, and detailed. It "is definitely a prediction," Kokotajlo told me recently, "but it's in the form of a scenario, which is a particular kind of prediction." Although it's based partly on assessments of trends in A.I., it's written like a sci-fi story (with charts); it throws itself headlong into the flow of events. Often, the specificity of its imagined details suggests their fungibility. Will there actually come a moment, possibly in June of 2027, when software engineers who've invented self-improving A.I. "sit at their computer screens, watching performance crawl up, and up, and up"? Will the Chinese government, in response, build a "mega-datacenter" in a "Centralized Development Zone" in Taiwan? These particular details make the scenario more powerful, but might not matter; the bottom line, Kokotajlo said, is that, "more likely than not, there is going to be an intelligence explosion, and a crazy geopolitical conflict over who gets to control the A.I.s." It's the details of that "intelligence explosion" that we need to follow. The scenario in "AI 2027" centers on a form of A.I. development known as "recursive self-improvement," or R.S.I., which is currently largely hypothetical. In the report's story, R.S.I. begins when A.I. programs become capable of doing A.I. research for themselves (today, they only assist human researchers); these A.I. "agents" soon figure out how to make their descendants smarter, and those descendants do the same for their descendants, creating a feedback loop. This process accelerates as the A.I.s start acting like co-workers, trading messages and assigning work to one another, forming a "corporation-within-a-corporation" that repeatedly grows faster and more effective than the A.I. firm in which it's ensconced. Eventually, the A.I.s begin creating better descendants so quickly that human programmers don't have time to study them and decide whether they're controllable.
[3]
Opinion | Silicon Valley Is at an Inflection Point
Ms. Hao is the author of "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI." On his second day in office this year, President Trump underscored his unequivocal support for the tech industry. Standing at a lectern next to tech leaders, he announced the Stargate Project, a plan to pump $500 billion in private investment over four years into artificial intelligence infrastructure. For comparison: The Apollo mission, which sent the first men to the moon, spent around $300 billion in today's dollars over 13 years. Sam Altman, OpenAI's chief executive, played down the investment. "It sounds crazy big now," he said. "I bet it won't sound that big in a few years." In the decade that I have observed Silicon Valley -- first as an engineer, then as a journalist -- I've watched the industry shift into a new paradigm. Tech companies have long reaped the benefits of a friendly U.S. government, but in its early months the Trump administration has made clear that the state will now grant new firepower to the industry's ambitions. The Stargate announcement was just one signal. Another was the Republican tax bill that the House passed last week, which would ban states from regulating A.I. for the next 10 years. The leading A.I. giants are no longer merely multinational corporations; they are growing into modern-day empires. With the full support of the federal government, soon they will be able to reshape most spheres of society as they please, from the political to the economic to the production of science. When I took my first job in Silicon Valley 10 years ago, the industry's wealth and influence were already expanding. The tech giants had grandiose missions -- take Google's, to "organize the world's information" -- which they used to attract young workers and capital investment. But with the promise of developing artificial general intelligence, or A.G.I., those grandiose missions have turned into civilizing ones. Companies claim they will bring humanity into a new, enlightened age -- that they alone have the scientific and moral clarity to control a technology that, in their telling, will usher us to hell if China develops it first. "A.I. companies in the U.S. and other democracies must have better models than those in China if we want to prevail," said Dario Amodei, chief executive of Anthropic, an A.I. start-up. This language is as far-fetched as it sounds, and Silicon Valley has a long history of making promises that never materialize. Yet the narrative that A.G.I. is just around the corner and will usher in "massive prosperity," as Mr. Altman has written, is already leading companies to accrue vast amounts of capital, lay claim to data and electricity, and build enormous data centers that are accelerating the climate crisis. These gains will fortify tech companies' power and erode human rights long after the shine of the industry's promises wears off. The quest for A.G.I. is giving companies cover to vacuum up more data than ever before, with profound implications for people's privacy and intellectual property rights. Before investing heavily in generative A.I., Meta had amassed data from nearly four billion accounts, but it no longer considers that enough. To train its generative A.I. 
models, the company has scraped the web with little regard for copyright and even considered buying up Simon & Schuster to meet the new data imperative. These developments are also convincing companies to escalate their consumption of natural resources. Early drafts of the Stargate Project estimated that its A.I. supercomputer could need about as much power as three million homes. And McKinsey now projects that by 2030, the global grid will need to add around two to six times the energy capacity it took to power California in 2022 to sustain the current rate of Silicon Valley's expansion. "In any scenario, these are staggering investment numbers," McKinsey wrote. One OpenAI employee told me that the company is running out of land and electricity. Meanwhile, there are fewer independent A.I. experts to hold Silicon Valley to account. In 2004, only 21 percent of people graduating from Ph.D. programs in artificial intelligence joined the private sector. In 2020, nearly 70 percent did, one study found. They've been won over by the promise of compensation packages that can easily rise over $1 million. This means that companies like OpenAI can lock down the researchers who might otherwise be asking tough questions about their products and publishing their findings publicly for all to read. Based on my conversations with professors and scientists, ChatGPT's release has exacerbated that trend -- with even more researchers joining companies like OpenAI. This talent monopoly has reoriented the kind of research that's done in this field. Imagine what would happen if most climate science were done by researchers who worked in fossil fuel companies. That's what's happening with artificial intelligence. Already, A.I. companies could be censoring critical research into the flaws and risks of their tools. Four years ago, the leaders of Google's ethical A.I. team said they were ousted after they wrote a paper raising questions about the industry's growing focus on large language models, the technology that underpins ChatGPT and other generative A.I. products. These companies are at an inflection point. With Mr. Trump's election, Silicon Valley's power will reach new heights. The president named David Sacks, a billionaire venture capitalist and A.I. investor, as his A.I. czar, and empowered another tech billionaire, Elon Musk, to slash through the government. Mr. Trump brought a cadre of tech executives with him on his recent trip to Saudi Arabia. If Senate Republicans now vote to prohibit states from regulating A.I. for 10 years, Silicon Valley's impunity will be enshrined in law, cementing these companies' empire status. Their influence now extends well beyond the realm of business. We are now closer than ever to a world in which tech companies can seize land, operate their own currencies, reorder the economy and remake our politics with little consequence. That comes at a cost -- when companies rule supreme, people lose their ability to assert their voice in the political process and democracy cannot hold. Technological progress does not require businesses to operate like empires. Some of the most impactful A.I. advancements came not from tech behemoths racing to recreate human levels of intelligence, but from the development of relatively inexpensive, energy-efficient models to tackle specific tasks such as weather forecasting. DeepMind's AlphaFold built a nongenerative A.I. model that predicts protein structures from their sequences -- a function critical to drug discovery and understanding disease. 
Its creators were awarded the 2024 Nobel Prize in Chemistry. A.I. tools that help everyone cannot arise from a vision of development that demands the capitulation of the majority to the self-serving agenda of the few. Transitioning to a more equitable and sustainable A.I. future won't be easy: It'll require everyone -- journalists, civil society, researchers, policymakers, citizens -- to push back against the tech giants, produce thoughtful government regulation wherever possible and invest more in smaller-scale A.I. technologies. When people rise, empires fall. Karen Hao is a reporter who covers artificial intelligence. She was formerly a foreign correspondent covering China's technology industry for The Wall Street Journal and a senior editor for A.I. at MIT Technology Review.
[4]
OpenAI Can Stop Pretending
The company is great at getting what it wants -- whether or not it's beholden to a nonprofit mission. OpenAI is a strange company for strange times. Valued at $300 billion -- roughly the same as seven Fords or one and a half PepsiCos -- the AI start-up has an era-defining product in ChatGPT and is racing to be the first to build superintelligent machines. The company is also, to the apparent frustration of its CEO Sam Altman, beholden to its nonprofit status. When OpenAI was founded in 2015, it was meant to be a research lab that would work toward the goal of AI that is "safe" and "benefits all of humanity." There wasn't supposed to be any pressure -- or desire, really -- to make money. Later, in 2019, OpenAI created a for-profit subsidiary to better attract investors -- the types of people who might otherwise turn to the less scrupulous corporations that dot Silicon Valley. But even then, that part of the organization was under the nonprofit side's control. At the time, it had released no consumer products and capped how much money its investors could make. Then came ChatGPT. OpenAI's leadership had intended for the bot to provide insight into how people would use AI without any particular hope for widespread adoption. But ChatGPT became a hit, kicking "off a growth curve like nothing we have ever seen," as Altman wrote in an essay this past January. The product was so alluring that the entire tech industry seemed to pivot overnight into an AI arms race. Now, two and a half years since the chatbot's release, Altman says some half a billion people use the program each week, and he is chasing that success with new features and products -- for shopping, coding, health care, finance, and seemingly any other industry imaginable. OpenAI is behaving like a typical business, because its rivals are typical businesses, and massive ones at that: Google and Meta, among others. Now 2015 feels like a very long time ago, and the charitable origins have turned into a ball and chain for OpenAI. Last December, after facing concerns from potential investors that pouring money into the company wouldn't pay off because of the nonprofit mission and complicated governance structure, the organization announced plans to change that: OpenAI was seeking to transition to a for-profit. The company argued that this was necessary to meet the tremendous costs of building advanced AI models. A nonprofit arm would still exist, though it would separately pursue "charitable initiatives" -- and it would not have any say over the actions of the for-profit, which would convert into a public-benefit corporation, or PBC. Corporate backers appeared satisfied: In March, the Japanese firm SoftBank conditioned billions of dollars in investments on OpenAI changing its structure. Resistance came as swiftly as the new funding. Elon Musk -- a co-founder of OpenAI who has since created his own rival firm, xAI, and seems to take every opportunity to undermine Altman -- wrote on X that OpenAI "was funded as an open source, nonprofit, but has become a closed source, profit-maximizer." He had already sued the company for abandoning its founding mission in favor of financial gain, and claimed that the December proposal was further proof. Many unlikely allies emerged soon after. 
Attorneys general in multiple states, nonprofit groups, former OpenAI employees, outside AI experts, economists, lawyers, and three Nobel laureates all have raised concerns about the pivot, even petitioning to submit briefs to Musk's lawsuit. OpenAI backtracked, announcing a new plan earlier this month that would have the nonprofit remain in charge. Steve Sharpe, a spokesperson for OpenAI, told me over email that the new proposed structure "puts us on the best path to" build a technology "that could become one of the most powerful and beneficial tools in human history." (The Atlantic entered into a corporate partnership with OpenAI in 2024.) Yet OpenAI's pursuit of industry-wide dominance shows no real signs of having hit a roadblock. The company has a close relationship with the Trump administration and is leading perhaps the biggest AI infrastructure buildout in history. Just this month, OpenAI announced a partnership with the United Arab Emirates and an expansion into personal gadgets -- a forthcoming "family of devices" developed with Jony Ive, former chief design officer at Apple. For-profit or not, the future of AI still appears to be very much in Altman's hands. Why all the worry about corporate structure anyway? Governance, boardroom processes, legal arcana -- these things are not what sci-fi dreams are made of. Yet those concerned with the societal dangers that generative AI, and thus OpenAI, pose feel these matters are of profound importance. The still more powerful artificial "general" intelligence, or AGI, that OpenAI and its competitors are chasing could theoretically cause mass unemployment, worsen the spread of misinformation, and violate all sorts of privacy laws. In the highest-flung doomsday scenarios, the technology brings about civilizational collapse. Altman has expressed these concerns himself -- and so OpenAI's 2019 structure, which gave the nonprofit final say over the for-profit's actions, was meant to guide the company toward building the technology responsibly instead of rushing to release new AI products, sell subscriptions, and stay ahead of competitors. "OpenAI's nonprofit mission, together with the legal structures committing it to that mission, were a big part of my decision to join and remain at the company," Jacob Hilton, a former OpenAI employee who contributed to ChatGPT, among other projects, told me. In April, Hilton and a number of his former colleagues, represented by the Harvard law professor Lawrence Lessig, wrote a letter to the court hearing Musk's lawsuit, arguing that a large part of OpenAI's success depended on its commitment to safety and the benefit of humanity. To renege on, or at least minimize, that mission was a betrayal. The concerns extend well beyond former employees. Geoffrey Hinton, a computer scientist at the University of Toronto who last year received a Nobel Prize for his AI research, told me that OpenAI's original structure would better help "prevent a super intelligent AI from ever wanting to take over." Hinton is one of the Nobel laureates who has publicly opposed the tech company's for-profit shift, alongside the economists Joseph Stiglitz and Oliver Hart. 
The three academics, joining a number of influential lawyers, economists, and AI experts, in addition to several former OpenAI employees, including Hilton, signed an open letter in April urging the attorneys general in Delaware and California -- where the company's nonprofit was incorporated and where the company is headquartered, respectively -- to closely investigate the December proposal. According to its most recent tax filing, OpenAI is intended to build AGI "that safely benefits humanity, unconstrained by a need to generate financial return," so disempowering the nonprofit seemed, to the signatories, self-evidently contradictory. In its initial proposal to transition to a for-profit, OpenAI still would have had some accountability as a public-benefit corporation: A PBC legally has to try to make profits for shareholders alongside pursuing a designated "public benefit" (in this case, building "safe" and "beneficial" AI as outlined in OpenAI's founding mission). In its December announcement, OpenAI described the restructure as "the next step in our mission." But Michael Dorff, another signatory to the open letter and a law professor at UCLA who studies public-benefit corporations, explained to me that PBCs aren't necessarily an effective way to bring about public good. "They are not great enforcement tools," he said -- they can "nudge" a company toward a given cause but do not give regulators much authority over that commitment. (Anthropic and xAI, two of OpenAI's main competitors, are also public-benefit corporations.) OpenAI's proposed conversion also raised a whole other issue -- a precedent for taking resources accrued under charitable intentions and repurposing them for profitable pursuits. And so yet another coalition, composed of nonprofits and advocacy groups, wrote its own petition for OpenAI's plans to be investigated, with the aim of preventing charitable organizations from being leveraged for financial gain in the future. Regulators, it turned out, were already watching. Three days after OpenAI's December announcement of the plans to revoke nonprofit oversight, Kathy Jennings, the attorney general of Delaware, notified the court presiding over Musk's lawsuit that her office was reviewing the proposed restructure to ensure that the corporation was fulfilling its charitable interest to build AI that benefits all of humanity. California's attorney general, Rob Bonta, was reviewing the restructure, as well. This ultimately led OpenAI to change plans. "We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware," Altman wrote in a letter to OpenAI employees earlier this month. The for-profit, meanwhile, will still transition to a PBC. The new plan is not yet a done deal: The offices of the attorneys general told me that they are reviewing the new proposal. Microsoft, OpenAI's closest corporate partner, has not yet agreed to the new structure. One could be forgiven for wondering what all the drama is for. Amid tension over OpenAI's corporate structure, the organization's corporate development hasn't so much as flinched. 
In just the past few weeks, the company has announced a new CEO of applications, someone to directly oversee and expand business operations; OpenAI for Countries, an initiative focused on building AI infrastructure around the world; and Codex, a powerful AI "agent" that does coding tasks. To OpenAI, these endeavors legitimately contribute to benefiting humanity: building more and more useful AI tools; bringing those tools and the necessary infrastructure to run them to people around the world; drastically increasing the productivity of software engineers. No matter OpenAI's ultimate aims, in a race against Google and Meta, some commercial moves are necessary to stay ahead. And enriching OpenAI's investors and improving people's lives are not necessarily mutually exclusive. The greater issue is this: There is no universal definition for "safe" or "beneficial" AI. A chatbot might help doctors process paperwork faster and help a student float through high school without learning a thing; an AI research assistant could help climate scientists arrive at novel insights while also consuming huge amounts of water and fossil fuels. Whatever definition OpenAI applies will be largely determined by its board. Altman, in his May letter to employees, contended that OpenAI is on the best path "to continue to make rapid, safe progress and to put great AI in the hands of everyone." But everyone, in this case, has to trust OpenAI's definition of safe progress. The nonprofit has not always been the most effective check on the company. In 2023, the nonprofit board -- which then and now had "control" over the for-profit subsidiary -- removed Altman from his position as CEO. But the company's employees revolted, and he was reinstated shortly thereafter with the support of Microsoft. In other words, "control" on paper does not always amount to much in reality. Sharpe, the OpenAI spokesperson, said the nonprofit will be able to appoint and remove directors to OpenAI's separate for-profit board, but declined to clarify whether its board will be able to remove executives (such as the CEO). The company is "continuing to work through the specific governance mandate in consultation with relevant stakeholders," he said. Sharpe also told me that OpenAI will remove the cap on shareholder returns, which he said will satisfy the conditions for SoftBank's billions of dollars in investment. A top SoftBank executive has said "nothing has really changed" with OpenAI's restructure, despite the nonprofit retaining control. If investors are now satisfied, the underlying legal structure is irrelevant. Marc Toberoff, a lawyer representing Musk in his lawsuit against OpenAI, wrote in a statement that "SoftBank pulled back the curtain on OpenAI's corporate theater and said the quiet part out loud. OpenAI's recent 'restructuring' proposal is nothing but window dressing." Lessig, the lawyer who represented the former OpenAI employees, told me that "it's outrageous that we are allowing the development of this potentially catastrophic technology with nobody at any level doing any effective oversight of it." Two years ago, Altman, in Senate testimony, seemed to agree with that notion: He told lawmakers that "regulatory intervention by governments will be critical to mitigate the risks" of powerful AI. 
But earlier this month, only a few days after writing to his employees and investors that "as AI accelerates, our commitment to safety grows stronger," he told the Senate something else: Too much regulation would be "disastrous" for America's AI industry. Perhaps -- but it might also be in the best interests of humanity.
[5]
Why we're unlikely to get artificial general intelligence anytime soon
Sam Altman, the CEO of OpenAI, recently told President Donald Trump during a private phone call that it would arrive before the end of his administration. Dario Amodei, the CEO of Anthropic, OpenAI's primary rival, repeatedly told podcasters it could happen even sooner. Tech billionaire Elon Musk has said it could be here before the end of the year. Like many other voices across Silicon Valley and beyond, these executives predict that the arrival of artificial general intelligence, or AGI, is imminent. Since the early 2000s, when a group of fringe researchers slapped the term on the cover of a book that described the autonomous computer systems they hoped to build one day, AGI has served as shorthand for a future technology that achieves human-level intelligence. There is no settled definition of AGI, just an entrancing idea: an artificial intelligence that can match the many powers of the human mind. Altman, Amodei and Musk have long chased this goal, as have executives and researchers at companies like Google and Microsoft. And thanks, in part, to their fervent pursuit of this ambitious idea, they have produced technologies that are changing the way hundreds of millions of people research, make art and program computers. These technologies are now poised to transform entire professions. But since the arrival of chatbots like OpenAI's ChatGPT and the rapid improvement of these strange and powerful systems over the last two years, many technologists have grown increasingly bold in predicting how soon AGI will arrive. Some are even saying that once they deliver AGI, a more powerful creation called "superintelligence" will follow. As these eternally confident voices predict the near future, their speculations are getting ahead of reality. And though their companies are pushing the technology forward at a remarkable rate, an army of more sober voices are quick to dispel any claim that machines will soon match human intellect. "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI. Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion. (Last year, as part of a high-profile lawsuit, Musk's attorneys said it was already here because OpenAI, one of Musk's chief rivals, has signed a contract with its main funder saying it will not sell products based on AGI technology.) And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations -- and wishful thinking. 
According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do. Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected -- the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other skeptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

'AI can get there'

Chatbots like ChatGPT are driven by what scientists call neural networks, mathematical systems that can identify patterns in text, images and sounds. By pinpointing patterns in vast troves of Wikipedia articles, news stories and chat logs, for instance, these systems can learn to generate humanlike text on their own, like poems and computer programs. That means these systems are progressing much faster than computer technologies of the past. In previous decades, software engineers built applications one line of code at a time, a tiny-step-by-tiny-step process that could never produce something as powerful as ChatGPT. Because neural networks can learn from data, they can reach new heights and reach them quickly. After seeing the improvement of these systems over the last decade, some technologists believe the progress will continue at much the same rate -- to AGI and beyond. "There are all these trends where all of the limitations are going away," said Jared Kaplan, the chief science officer at Anthropic. "AI intelligence is quite different from human intelligence. Humans learn much more easily to do new tasks. They don't need to practice as much as AI needs to. But eventually, with more practice, AI can get there." Among AI researchers, Kaplan is known for publishing a groundbreaking academic paper that described what are now called "the Scaling Laws." These laws essentially said that the more data an AI system analyzed, the better it would perform. Just as a student learns more by reading more books, an AI system finds more patterns in the text and learns to more accurately mimic the way people put words together. In recent months, companies like OpenAI and Anthropic have used up just about all of the English text on the internet, which meant they needed a new way of improving their chatbots. So they are leaning more heavily on a technique that scientists call reinforcement learning. Through this process, which can extend over weeks or months, a system can learn behavior through trial and error. By working through thousands of math problems, for instance, it can learn which techniques tend to lead to the right answer and which do not. Thanks to this technique, researchers like Kaplan believe that the Scaling Laws (or something like them) will continue. 
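(A note on the shape of those "Scaling Laws": in the published literature they are power-law curves, with predicted loss falling smoothly as training data, model size, or compute grows. The sketch below only illustrates that shape; the exponent and constant are assumed placeholder values, not figures quoted in this article or taken from Kaplan's paper.)

```python
# Toy illustration of a data-scaling power law: predicted loss ~ (D_c / D) ** alpha.
# ALPHA and D_C are assumed placeholder values, not figures from the article above.

ALPHA = 0.095    # assumed exponent: how quickly loss falls as training data grows
D_C = 5.4e13     # assumed constant, in tokens

def predicted_loss(tokens: float) -> float:
    """Hypothetical loss for a model trained on `tokens` tokens of text."""
    return (D_C / tokens) ** ALPHA

for tokens in (1e9, 1e10, 1e11, 1e12):
    # More data keeps lowering the predicted loss, but with diminishing returns.
    print(f"{tokens:.0e} tokens -> predicted loss {predicted_loss(tokens):.2f}")
```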
As the technology continues to learn through trial and error across myriad fields, researchers say, it will follow the path of AlphaGo, a machine built in 2016 by a team of Google researchers. Through reinforcement learning, AlphaGo learned to master the game of Go, a complex Chinese board game that is often compared to chess, by playing millions of games against itself. That spring, it beat one of the world's best players, stunning the AI community and the world. Most researchers had assumed that AI needed another 10 years to achieve such a feat.

The gap between humans and machines

It is indisputable that today's machines have already eclipsed the human brain in some ways, but that has been true for a long time. A calculator can do basic math faster than a human. Chatbots like ChatGPT can write faster, and as they write, they can instantly draw on more texts than any human brain could ever read or remember. These systems are exceeding human performance on some tests involving high-level math and coding. But people cannot be reduced to these benchmarks. "There are many kinds of intelligence out there in the natural world," said Josh Tenenbaum, a professor of computational cognitive science at the Massachusetts Institute of Technology. One obvious difference is that human intelligence is tied to the physical world. It extends beyond words and numbers and sounds and images into the realm of tables and chairs and stoves and frying pans and buildings and cars and whatever else we encounter with each passing day. Part of intelligence is knowing when to flip a pancake sitting on the griddle. Some companies are training humanoid robots in much the same way that others are training chatbots. But this is more difficult and more time-consuming than building ChatGPT, requiring extensive training in physical labs, warehouses and homes. Robotic research is years behind chatbot research. The gap between human and machine is even wider. In the physical and digital realms, machines still struggle to match the parts of human intelligence that are harder to define. "AI needs us: living beings, producing constantly, feeding the machine," said Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy. "It needs the originality of our ideas and our lives."

A thrilling fantasy

For people inside the tech industry and out, claims of imminent AGI can be thrilling. Humans have dreamed of creating an artificial intelligence going back to the myth of the Golem, which appeared as early as the 12th century. This is the fantasy that drives works like Mary Shelley's "Frankenstein" and Stanley Kubrick's "2001: A Space Odyssey." Now that many of us are using computer systems that can write and even talk like we do, it is only natural for us to assume that intelligent machines are almost here. It is what we have anticipated for centuries. When a group of academics founded the AI field in the mid-1950s, they were sure it wouldn't take very long to build computers that re-created the brain. Some argued that a machine would beat the world chess champion and discover its own mathematical theorem within a decade. But none of that happened on that time frame. Some of it still hasn't. Many of the people building today's technology see themselves as fulfilling a kind of technological destiny, pushing toward an inevitable scientific moment, like the creation of fire or the atomic bomb. But they cannot point to a scientific reason that it will happen soon. 
That is why many other scientists say no one will reach AGI without a new idea -- something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it. Yann LeCun, the chief AI scientist at Meta, has dreamed of building what we now call AGI since he saw "2001: A Space Odyssey" in 70mm Cinerama at a Paris movie theater when he was 9 years old. And he was among the three pioneers who won the 2018 Turing Award -- considered the Nobel Prize of computing -- for their early work on neural networks. But he does not believe that AGI is near. At Meta, his research lab is looking beyond the neural networks that have entranced the tech industry. LeCun and his colleagues are searching for the missing idea. "A lot is riding on figuring out whether the next-generation architecture will deliver human-level AI within the next 10 years," he said. "It may not. At this point, we can't tell."
This story explores the dramatic shift in AI industry attitudes towards regulation, from initially welcoming oversight to now resisting it, driven by fears of falling behind China and losing the "AI race".
In a striking reversal, the artificial intelligence industry has dramatically shifted its stance on regulation. Initially welcoming oversight, major players now advocate for minimal interference, citing fears of falling behind in the global "AI race". This change reflects broader shifts in the political landscape and intensifying international competition, particularly with China.
In May 2023, OpenAI CEO Sam Altman testified before a Senate subcommittee, advocating for strong AI regulations. He stated, "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models" [1]. This sentiment was widely shared across the industry at the time.

However, by May 2025, Altman's message had changed significantly. In a subsequent Senate hearing, he warned that overregulation would be "disastrous" and emphasized the need for "space to innovate and to move quickly" [1]. This shift from "regulate me" to "invest in me" mirrors a broader change in industry attitudes.

The change in stance coincides with significant political shifts. The Trump administration's return to power has brought a markedly different approach to AI regulation. The new doctrine, as articulated by Vice President J.D. Vance, prioritizes "pro-growth AI policies" over safety concerns [1]. This aligns closely with the views of tech industry figures like Marc Andreessen, who equated AI regulation with "murder" due to potential lost opportunities [1].

Fear of falling behind China in AI development has become a primary driver of U.S. policy. This concern has led to initiatives like the Stargate Project, a $500 billion investment plan for AI infrastructure [3]. The focus on maintaining a competitive edge is reshaping regulatory discussions, with safety considerations often taking a back seat to innovation and growth.

As AI companies grow in influence, their resource demands are escalating dramatically. OpenAI's expansion plans, for instance, are constrained by limitations in land and electricity availability [3]. The industry's growing power consumption is projected to require significant increases in global energy capacity by 2030 [3].

While some industry leaders predict the imminent arrival of Artificial General Intelligence (AGI), many researchers remain skeptical. A survey by the Association for the Advancement of Artificial Intelligence found that over 75% of respondents doubted current methods would lead to AGI [5]. This highlights the ongoing debate about the pace and direction of AI development.

The concentration of AI expertise in private companies raises concerns about potential conflicts of interest in research. With nearly 70% of AI Ph.D. graduates joining the private sector as of 2020, there are worries about the independence of AI research and the ability to critically examine industry practices [3].

As the AI landscape continues to evolve rapidly, the tension between innovation, regulation, and global competition remains at the forefront of policy discussions. The industry's shift from welcoming regulation to resisting it reflects the high stakes and complex challenges in governing this transformative technology.
Summarized by Navi