19 Sources
[1]
Anthropic CEO pens thinly veiled screed against regulation
If only there were some technology to boil things down to bullet points

Opinion Anthropic CEO Dario Amodei has published a novella-length essay about the risk of superintelligent AI, something that doesn't yet exist. It's as good an advertisement for the summarization capabilities of the company's Claude model family as you can find. tl;dr AI presents a serious risk and intervention is required to prevent disaster, though not so much that the regulations spoil the party. Go team.

The AI threat has been a talking point among tech cognoscenti for more than a decade, and longer than that if you count sci-fi alarmism. Rewind to 2014, when Elon Musk warned: "With artificial intelligence we are summoning the demon." You can measure Musk's concern by his investment in xAI. AI luminary Geoffrey Hinton offered a more convincing example of concern through his resignation from Google and the doubts he expressed about his life's work in machine learning. It's a message that recently inspired AI industry insiders to try to pop the AI bubble with poisoned data.

If you're concerned about this, you may find consolation in the fact that Amodei made a prediction that has not come to pass. In March 2025, he said: "I think we'll be there in three to six months - where AI is writing 90 percent of the code." And in 12 months, he said, AI would essentially be writing all of the code. Spoiler: human developers still have jobs.

But the problem with Amodei's essay of almost 22,000 words is his insistence on framing the fraught state of the world in terms of AI. If you're a hammer, everything looks like a nail. If you're head of an AI company, it's AI everywhere, all the time. If you're, say, on the streets of Minneapolis, or Tehran, or Kyiv, or Gaza, or Port-au-Prince, or any other area short on supplies or stability, AI probably isn't at the top of your list of threats. Nor will it be a year or three from now.

Amodei floats his cautionary tale on the back of his "country of geniuses in a datacenter" scenario. The analogy is not much better than the discredited Infinite Monkeys Theorem, which posits that a sufficient number of keyboard-equipped chimps would eventually produce the works of Shakespeare. Certainly 50 million brainiacs - proxies for AI models - could get up to some mischief, but the national security advisor of a major state has more plausible and present threats to consider.

If you look at the leading causes of mortality in 2023, AI doesn't show up. The dominant category is circulatory (e.g. heart disease) at 28.5 percent, followed by neoplasms (e.g. cancer) at 22.0 percent. External causes account for 7.0 percent of the total. That includes suicide, at 2.1 percent of the total, which is actually something that AI may make worse when people try to use it to manage mental health problems.

Polling company Ipsos conducts a monthly "What Worries the World" survey, and AI doesn't make the list. When the biz last checked the global public pulse in September 2025, top concerns were: crime and violence (32 percent); inflation (30 percent); poverty and social inequity (29 percent); unemployment (28 percent); financial/political corruption (28 percent); and coronavirus (2 percent).

AI now plays a role in some of these concerns. Investment in AI datacenters has raised utility prices and led to a shortage of DRAM. The construction of these datacenters is increasing demand for water - though Amodei contends this isn't a real problem. High capex spending may be accompanied by layoffs as companies look for ways to compensate by cutting costs.
And for some occupations, AI may be capable enough to automate some portion of job requirements. But focusing on the danger and unpredictability of AI misses the point: it's people who allow this and it's people who can manage it.

This is a debate about regulation, which is presently minimal. We can choose how much AI costs by deciding whether creative work can be captured, laundered, and resold without compensation to those who created it. We can choose whether the government should subsidize the development of these models. We can impose liability on model makers when models can be used to generate sexual abuse material or when models make material errors. We can decide not to let AI models make nuclear launch decisions.

Amodei does identify some risks that are more pressing than the theorized legion of genius models. "The thing to worry about is a level of wealth concentration that will break society," he writes, noting that Elon Musk's $700 billion net worth already exceeds the ~2 percent of GDP that John D. Rockefeller's wealth represented during the Gilded Age. He makes that point amid speculation that the wealth generated by AI companies will lead to personal fortunes in the trillions, which is a possibility if the AI bubble doesn't collapse on itself.

But AI companies still have to prove they can turn a profit as open source models make headway. Anthropic isn't expected to become profitable until 2028. For OpenAI, profit is projected in 2030, if the company survives that long, after burning "roughly 14 times as much cash as Anthropic," according to the Wall Street Journal. Amodei's optimism about revenue potential aside, it's the money that matters.

Those not blessed with Silicon Valley wealth may yet develop an aversion to billionaire-controlled tech platforms that steer public opinion and suppress regulation. Let's not forget that much of the investment in AI followed from the belief that AI models will break Google's grip on search and advertising, which has persisted due to the lack of effective antitrust enforcement.

Amodei argues for a cautious path, one that focuses on denying China access to powerful chips. "I do see a path to a slight moderation in AI development that is compatible with a realist view of geopolitics," he writes. "That path involves slowing down the march of autocracies towards powerful AI for a few years by denying them the resources they need to build it, namely chips and semiconductor manufacturing equipment." His path avoids a more radical approach driven by the "public backlash against AI" that he says is brewing.

"The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones," Amodei argues. That doesn't sound like someone worried about the AI demon. It sounds like every business leader who wants to minimize burdensome regulations.

The fact is no one wants superintelligent AI, which by definition would make unexpected decisions. Last year, when AI agents took up all the air in the room, the goal was to constrain behavior and make agents predictable and knowable, to make them subservient rather than independent, to prevent them from deleting all your files and posting your passwords on Reddit.

And if the reported slowdown in AI model advancement persists, we'll be free to focus on more pressing problems - like preventing billionaires from drowning democracy in a flood of AI-generated misinformation and slop. ®
[2]
Dario Amodei warns AI may cause 'unusually painful' disruption to jobs
Anthropic CEO Dario Amodei has issued a fresh warning about how AI will disrupt the job market, saying it will cause "unusually painful" disruption. The AI chief, who co-founded Anthropic in 2021 with his sister Daniela Amodei and is behind the creation of the AI chatbot Claude, warned last year that AI could wipe out half of all entry-level white-collar jobs. The issue has divided opinion among business and tech leaders. Amodei's warning prompted Nvidia CEO Jensen Huang to say he "thinks AI is so scary, but only [Anthropic] should do it."

On Monday, Amodei published a roughly 20,000-word essay arguing the risks AI poses are not being taken seriously and warning the technology will lead to a "shock" to the job market bigger than any before. Amodei set out what he regards as the potential harms of AI, including the tech becoming autonomous and unpredictable, bad actors or terrorist groups using it to create bio-weapons, and some countries creating a "global totalitarian dictatorship" by exploiting AI to gain disproportionate power.

"Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it," Amodei wrote.

In the essay, he elaborated on his argument that humans will find it difficult to recover from AI's impact on the labor market in the short term. "New technologies often bring labor market shocks, and in the past, humans have always recovered from them, but I am concerned that this is because these previous shocks affected only a small fraction of the full possible range of human abilities, leaving room for humans to expand to new tasks," Amodei said. "AI will have effects that are much broader and occur much faster, and therefore I worry it will be much more challenging to make things work out well," he added.
[3]
Anthropic's CEO just warned everyone that the next big AI risk to humanity is 'actually AI companies themselves'
Anthropic CEO Dario Amodei explains why AI risks demand urgent action

As AI grows exponentially smarter, and chatbots become integrated into our daily lives, public excitement has increasingly given way to something else: unease. For years, pop culture has warned us what could happen if powerful machines go unchecked. Films like "The Terminator," "2001: A Space Odyssey," "Blade Runner" and "The Matrix" all explore variations of the same fear -- AI that turns on its creators, manipulates human behavior or becomes so deeply embedded in daily life that society can no longer function without it.

Those anxieties no longer live only on movie screens. In a newly published 38-page essay shared on his website, Anthropic CEO Dario Amodei lays out a sobering list of risks advanced AI could pose if left unchecked. His warning isn't just about rogue machines or science-fiction scenarios. One of the most pressing dangers, he argues, may come from the very companies racing to build and deploy AI systems in the first place.

Dario Amodei points to the companies behind AI as an imminent threat

One of the most striking warnings Amodei raises isn't about AI going rogue -- it's about the growing power of the companies building it. "It is somewhat awkward to say this as the CEO of an AI company," Amodei writes, "but I think the next tier of risk is actually AI companies themselves."

He points to the sheer scale of influence these firms now hold. AI companies control massive data centers, train the most advanced models, and possess unmatched expertise in how those systems are used. More concerning, some of them interact daily with tens -- or even hundreds -- of millions of users. That kind of reach comes with real risk. Amodei warns that AI companies could theoretically use their products to manipulate or "brainwash" users at scale, arguing that the governance of AI companies deserves far more public scrutiny than it currently receives.

Those concerns feel increasingly urgent as government oversight struggles to keep pace. The idea that powerful AI firms could shape public behavior through chatbots and consumer tools no longer feels far-fetched -- and the lack of clear regulation only heightens that anxiety.

The physical footprint of AI is already making its presence felt. Data centers have rapidly expanded across the U.S., bringing unintended consequences for nearby communities. These facilities consume enormous amounts of electricity and water, place heavy strain on local power grids, and in some areas have been linked to environmental conditions that residents say make the air harder to breathe. As a result, protests against new AI data centers are becoming more common -- a reminder that the impact of AI isn't just digital. It's physical, environmental and increasingly hard to ignore. Recent protests have sprung up in North Carolina, Pennsylvania and Virginia, and a community in Wisconsin is looking to expel its mayor after he approved a data center being built there.

He also alluded to other threats AI could pose in the future

Throughout the rest of Amodei's caution-filled essay, he brought up a slew of other dangers that AI could present in the future. Among those are the rise of terrorists using AI to carry out their attacks, a colossal increase in the rate of job losses, and government leaders being deterred from speaking on those problems due to the power and money that come with supporting AI.
On that last point, Amodei had this to say: "There is so much money to be made with AI -- literally trillions of dollars per year. This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all."

Amodei presented a solution that could keep AI's rapid growth in check

Amodei didn't just lay out the potential issues that may arise from the swift development of AI; he also spoke up about one of the solutions that could prevent them from happening. He pointed to millionaires and billionaires, who could use their power and influence to do good instead of adopting a nonchalant attitude. "Wealthy individuals have an obligation to help solve this problem," Amodei noted. "It is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless."

Bottom line

While AI has resulted in some positive changes to society (aiding healthcare workers, personalizing education, and making daily living easier with the rise of smart assistants), plenty of negative ones have sprung up in droves. It's reassuring to see the CEO of one of the leading AI companies do his part to make everyone aware of the grim future we face if AI is left unregulated. Amodei's standout comment from his essay is a call to action for us all: "Humanity needs to wake up, and this essay is an attempt -- a possibly futile one, but it's worth trying -- to jolt people awake. The years in front of us will be impossibly hard, asking more of us than we think we can give."
[4]
Anthropic CEO issues dire AI warning. Here's what he gets wrong.
In a new 38-page essay published on his personal website, Anthropic CEO and co-founder Dario Amodei makes a plea for urgent action to address the risks of super-intelligent AI. Amodei writes that this type of self-improving AI could be just one to two years away -- and warns that the risks include the enslavement and "mass destruction" of mankind.

The essay, "The Adolescence of Technology," deals with AI risks both known and unknown. The CEO talks at length about the potential for AI-powered bioterrorism, drone armies controlled by malevolent AI, and AI making human workers obsolete at a society-wide scale. To address these risks, Amodei suggests a variety of interventions -- from self-regulation within the AI industry all the way up to amending the U.S. Constitution.

Amodei's essay is thoughtful and well-researched. But it also commits the cardinal sin of AI writing -- he can't resist anthropomorphizing AI. And by treating his product like a conscious, living being, Amodei falls into the very trap he warns against.

Tellingly, at the same time, the New York Times published a major investigation into "AI psychosis." This is an umbrella term without a precise medical definition, and it refers to a wide range of mental health problems exacerbated by AI chatbots like ChatGPT or Claude. It can include delusions, paranoia, or a total break from reality. These cases often have one thing in common: a vulnerable person spends so long talking to an AI chatbot that they start to believe the chatbot is alive.

The large language models (LLMs) that power platforms like ChatGPT can produce a very lifelike facsimile of human conversation, and over time, users can develop an emotional reliance on the chatbot. When you spend too long talking to a machine that's programmed to sound empathetic -- and when that machine is ever-present and optimized for engagement -- it's all too easy to forget there's no mind at work behind the screen. LLMs are powerful word-prediction engines, but they do not have consciousness, or feelings, or empathy.

Reading "The Adolescence of Technology," I started to wonder if Amodei has made too much of an emotional connection to his own machine. Amodei is responsible for creating one of the most powerful chatbots in the world. He has no doubt spent countless hours using Claude, talking to it, testing it, and improving it. Has he, too, started to see a god in the machine?

The essay describes AI chatbots as "psychologically complex." He talks about AI as if it has motives and goals of its own. He describes Anthropic's existing models as having a robust sense of "self-identity" as a "good person." In short, he's anthropomorphizing generative AI -- and not merely some future, super-intelligent form of AI, but the LLM-based AI of today.

Why AI doom is always around the corner

So much of the conversation around the dangers of AI is pulled straight from science fiction, which Amodei admits -- and yet he too is guilty of the same reach. The essay opens with a section titled "Avoiding doomerism," where Amodei criticizes the "least sensible" and most "sensationalistic" voices discussing AI risks. "These voices used off-putting language reminiscent of religion or science fiction," he writes. Yet Amodei's essay also repeatedly evokes science fiction. And as for religion, he seems to harbor a faith-like belief that AI superintelligence is nigh. Stop me if you've heard this one before: "It cannot possibly be more than a few years before AI is better than humans at essentially everything.
In fact, that picture probably underestimates the likely rate of progress."

To AI doomers, super-intelligence is always just around the corner. In a previous essay with a more utopian bent, "Machines of Loving Grace," Amodei wrote that super AI could be just one or two years away. (That essay was published in October 2024, which was one to two years ago.) Now here he is making the same estimate: super-intelligence is one to two years away. Again, it's just around the corner. Soon, very soon, generative AI tools like Claude will learn how to improve themselves, achieving an explosion of intelligence like nothing the planet has ever seen before. The singularity is coming soon, the AI boosters say. Just trust us, they say.

But something cannot be perpetually imminent. Should we expect generative AI to keep progressing exponentially, even as the AI industry seems to be banging its head against the wall of diminishing returns? Certainly, any AI CEO would have a strong incentive to think so. An unprecedented amount of money has already been invested in developing AI infrastructure. The AI industry needs that money spigot to stay open at all costs. At Davos last week, Jensen Huang of NVIDIA suggested that the investment in AI infrastructure is so large that it can't be a bubble. From the people who brought you "too big to fail" comes a new hit song: "too big to pop."

I've seen the benefits of AI technology, and I do believe it's a powerful tool. However, when an AI salesman tells you that AI is an unstoppable world-changing technology on the order of the agricultural revolution, or a world-altering threat on the order of the atom bomb, and that AI tools will soon "be able to do everything" you can do, you should take this prediction for what it is: a sales pitch.

AI doomerism has always been a form of self-flattery. It attributes to human beings god-like powers to create new forms of life, and casts Silicon Valley oligarchs as titans with the power to shape the very foundations of the world. I suspect the truth is much simpler. AI is a powerful tool. And all powerful tools can be dangerous in the wrong hands. Laws are needed to constrain the unchecked growth of AI companies, their effect on the environment, and on growing wealth inequality. To his credit, Amodei calls for industry regulation in his essay, mentioning the r-word 10 times. But he also mistakes science fiction for science fact in the process.

There is growing evidence that LLMs will never lead to the type of super-intelligence that Amodei believes in with such zeal. As one Apple research paper put it, LLMs seem to offer only "the illusion of thinking." The long-awaited GPT-5 largely disappointed ChatGPT's biggest fans. And many large-scale AI enterprise projects seem to be crashing and burning, possibly as many as 95 percent.

Instead of worrying about the bogeyman of a Skynet-like apocalypse, we should instead focus on the concrete harms of AI -- unnecessary layoffs inspired by overconfident AI projections and nonconsensual deepfake pornography, to name just two. The good news for humans is that these are solvable problems if we put our human minds together -- no science fiction thought experiment required.
[5]
'Wake up to the risks of AI, they are almost here,' Anthropic boss warns
Dario Amodei questions if human systems are ready to handle the 'almost unimaginable power' that is 'potentially imminent'

Humanity is entering a phase of artificial intelligence development that will "test who we are as a species", the boss of leading AI startup Anthropic has said, arguing that the world needs to "wake up" to the risks. Dario Amodei, co-founder and chief executive of the company behind the hit chatbot Claude, voiced his fears in a 19,000-word essay entitled "The Adolescence of Technology".

Describing the arrival of highly powerful AI systems as potentially imminent, he wrote: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species." Amodei added: "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

The tech entrepreneur, whose company is reportedly worth $350bn (£255bn), said his essay was an attempt to "jolt people awake" because the world needs to "wake up" to the need for action on AI safety. Amodei published the text as the UK government announced Anthropic would help create chatbots that support jobseekers with career advice and finding employment, as part of developing an AI assistant for public services in general. Last week, the company published an 80-page "constitution" for Claude in which it set out how it wanted to make its AI "broadly safe, broadly ethical". Amodei co-founded Anthropic in 2021 along with other former staff members from rival OpenAI, which developed ChatGPT.

A prominent voice for online safety, known for consistently warning about the dangers of unrestrained AI development, he wrote that the world is "considerably closer to real danger" in 2026 than it was in 2023, when the debate over existential risk from AI raced up the political agenda. He alluded to the controversy over sexualised deepfakes created by Elon Musk's Grok AI that flooded the social media platform X over Christmas and the New Year, including warnings that the chatbot was creating child sexual abuse material. Amodei wrote: "Some AI companies have shown a disturbing negligence towards the sexualisation of children in today's models, which makes me doubt that they'll show either the inclination or the ability to address autonomy risks in future models."

The Anthropic CEO said powerful AI systems capable of autonomously building their own successor systems could be as little as one to two years away. He defined "powerful AI" as a model - the technology that underpins tools such as chatbots - that is smarter than a Nobel prizewinner across fields such as biology, mathematics, engineering and writing. It can give directions to or take directions from humans and, although it "lives" on a computer screen, it can control robots and even design them for its own use.

While acknowledging that powerful AI could be "considerably further out" than the two-year timeframe, Amodei said the recent rapid progress made by the technology should be taken seriously. "If the exponential continues - which is not certain, but now has a decade-long track record supporting it - then it cannot possibly be more than a few years before AI is better than humans at essentially everything," he wrote.

Last year, Amodei warned that AI could halve all entry-level white-collar jobs and send overall unemployment rocketing to 20% within the next five years.
In his essay, Amodei cautioned that the economic prize from AI, such as the productivity gains from eliminating jobs, could be so great that no one applies the brakes. "This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilisation to impose any restraints on it at all," he said.

However, Amodei said he remained optimistic about the outcome: "I believe if we act decisively and carefully, the risks can be overcome - I would even say our odds are good. And there's a hugely better world on the other side of it. But we need to understand that this is a serious civilisational challenge."
[6]
Anthropic CEO's grave warning: AI will "test us as a species"
"Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it." Why it matters: Amodei's company has built among the most advanced LLM systems in the world. * Anthropic's new Claude Opus 4.5 and coding and Cowork tools are the talk of Silicon Valley and America's C-suites. * AI is doing 90% of the computer programming to build Anthropic's products, including its own AI. Amodei, one of the most vocal moguls about AI risk, worries deeply that government, tech companies and the public are vastly underestimating what could go wrong. His memo -- a sequel to his famous 2024 essay, "Machines of Loving Grace: How AI Could Transform the World for the Better" -- was written to jar others, provoke a public debate and detail the risks. * Amodei insists he's optimistic that humans will navigate this transition -- but only if AI leaders and government are candid with people and take the threats more seriously than they do today. Amodei's concerns flow from his strong belief that within a year or two, we will face the stark reality of what he calls a "country of geniuses in a datacenter." * What he means is that machines with Nobel Prize-winning genius across numerous sectors -- chemistry, engineering, etc. -- will be able to build things autonomously and perpetually, with outputs ranging from words or videos to biological agents or weapons systems. * "If the exponential [progress] continues -- which is not certain, but now has a decade-long track record supporting it -- then it cannot possibly be more than a few years before AI is better than humans at essentially everything," he writes. Among Amodei's specific warnings to the world in his essay, "The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI": Call to action: "[W]ealthy individuals have an obligation to help solve this problem," Amodei says. "It is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless." The bottom line: "Humanity needs to wake up, and this essay is an attempt -- a possibly futile one, but it's worth trying -- to jolt people awake," Amodei writes. "The years in front of us will be impossibly hard, asking more of us than we think we can give."
[7]
Anthropic CEO Warns That the AI Tech He's Creating Could Ravage Human Civilization
AI tech leaders have a lot to gain from striking fear into the hearts of their investors. By painting the tech as an ultra-powerful force that could easily bring humanity to its knees, the industry is hoping to sell itself as a panacea: a remedy to a situation it had a firm hand in bringing about.

Case in point, Anthropic cofounder and CEO Dario Amodei is back with a 19,000-word essay posted to his blog, arguing that "humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it." In light of that existential danger, Amodei attempted to lay out a framework to "defeat" the risks presented by AI -- which, by his own admission, may well be "futile." "Humanity needs to wake up, and this essay is an attempt -- a possibly futile one, but it's worth trying -- to jolt people awake," he wrote.

Amodei argued that "we are considerably closer to real danger in 2026 than we were in 2023," citing the risks of major job losses and a "concentration of economic power" and wealth. However, the incentives to invest in meaningful guardrails simply aren't there.

In his essay, Amodei took a thinly veiled dig at Elon Musk's Grok chatbot, which has been swept up in a major controversy over creating nonconsensual sexual images. "Some AI companies have shown a disturbing negligence towards the sexualization of children in today's models, which makes me doubt that they'll show either the inclination or the ability to address autonomy risks in future models," he wrote.

The CEO also cited the risk of AIs developing dangerous bioweapons or "superior" military weapons. An AI could "go rogue and overpower humanity" or allow countries to "use their advantage in AI to gain power over other countries," leading to the "alarming possibility of a global totalitarian dictatorship." Amodei lamented that we simply aren't willing to address these risks head-on, at least right now.

In its current race to the bottom, the AI industry finds itself in a "trap," Amodei argued. "AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all," he wrote. "Taking time to carefully build AI systems so they do not autonomously threaten humanity is in genuine tension with the need for democratic nations to stay ahead of authoritarian nations and not be subjugated by them," he wrote. "But in turn, the same AI-enabled tools that are necessary to fight autocracies can, if taken too far, be turned inward to create tyranny in our own countries." "AI-driven terrorism could kill millions through the misuse of biology, but an overreaction to this risk could lead us down the road to an autocratic surveillance state," he argued.

As part of a solution, Amodei renewed his calls to deny other countries the resources to build powerful AI. He went as far as to liken the US selling Nvidia AI chips to China to "selling nuclear weapons to North Korea and then bragging that the missile casings are made by Boeing and so the US is 'winning.'"

Plenty of questions remain surrounding the real risks of advanced AI, a subject that remains heavily debated between realists, skeptics, and proponents of the tech. Critics have pointed out that the existential risks often cited by leaders like Amodei may be overblown, particularly as improvements in the tech appear to be slowing. We should also consider the greater context of Amodei's verbose warning.
The CEO's company is looking to close a massive, multibillion-dollar round of funding at a valuation of $350 billion. In other words, Amodei has an enormous financial interest in positioning himself as the solution to the risks he cites in his essay.
[8]
Anthropic CEO Dario Amodei's proposed remedies matter more than warnings about AI's risks
Dario Amodei, CEO of the AI company Anthropic, dropped a 20,000-word essay on Monday called The Adolescence of Technology, in which he warned that AI was about to "test who we are as a species" and that "humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

The essay, which was published on Amodei's personal blog, has generated a tremendous amount of buzz on social media. But it is worth pointing out what is and isn't new here. Amodei has been concerned about the catastrophic risks of AI for years. He has warned about the risks of AI helping people develop bioweapons or chemical weapons. He has warned about powerful AI escaping human control. He has warned about potential widespread job losses as AI becomes more capable and is adopted by more industries. And he has warned about the dangers of concentrated power and wealth as AI adoption grows. In his latest essay, Amodei reiterates all of these concerns -- although sometimes in starker language and sometimes with shorter timelines for when he believes these risks will materialize.

Headlines about his essay have, somewhat understandably, focused on Amodei's blunt delineation of AI risks. Among AI companies, Anthropic is known for having perhaps the greatest focus on AI safety -- a focus that it has found has actually helped it gain commercial traction among big companies, as Fortune detailed in its January cover story on Amodei's company. This is because many of the steps Anthropic has taken to make sure its models don't pose catastrophic risks to humanity have also made these models more reliable and controllable -- features that most businesses value.

So in many ways, Amodei's essay is as much a novella-length marketing message as it is an impassioned prophecy and call to action. Which is not to say that Amodei is being insincere. It is merely to point out that his essay works on multiple levels, and that what he thinks is needed to secure humanity's future as AI advances also aligns well with Anthropic's existing brand positioning in the market. It is telling, for example, how many times Amodei mentions the "constitution" Anthropic has developed for its AI model Claude as an important factor mitigating various risks -- from bioterrorism to the risk that the model will escape human control. This constitution, which Anthropic just updated, is one thing that differentiates Anthropic's AI models from those offered by its competitors, such as OpenAI, Google, Meta, and Elon Musk's xAI.
[9]
Anthropic CEO Dario Amodei's warning from inside the AI boom
Dario Amodei just gave the kind of warning AI pragmatists love: urgent, sweeping, and delivered from a podium built out of venture capital. In a sprawling, 38-page essay, "The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI," posted Monday, the Anthropic CEO lays out a civilizational-risk map -- bioterror, autocracy, labor upheaval, and further wealth concentration. He lands on the uncomfortable thesis that the AI prize is so glittering (and its strategic value is so obvious) that nobody inside the race can be trusted to slow it down, even if the risks are enormous.

"I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species," he wrote. "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

The essay scans like a threat assessment, framed through a single metaphor Amodei returns to obsessively: a "country of geniuses in a datacenter." (It appears in the text 12 times, to be exact.) Picture millions of AI systems, smarter than Nobel laureates, operating at machine speed, coordinating flawlessly, and increasingly capable of acting in the world. The danger, Amodei argues, is that the concentration of capability creates a strategic problem before it creates a moral one. Power scales faster than institutions do.

But Amodei's essay also reads as a positioning statement. When the CEO of a frontier lab writes that the "trap" is the trillions of AI dollars at stake, he's describing the very gold rush he's helping lead, while pitching Anthropic as the only shop that's worrying out loud -- a billionaire CEO begging society to impose restraints on a technology his company is racing to sell. So while the argument may be sincere, the timing is also marketing-grade; on the same day that Amodei's essay dropped, Claude, Anthropic's chatbot, got an MCP extension update.

The risks he catalogs fall into five buckets. First, autonomy. Second, misuse by individuals -- particularly in biology. Third, misuse by states, especially authoritarian ones. Fourth, economic disruption. And finally, indirect effects -- cultural, psychological, and social changes that arrive faster than norms can form. Threaded through all of it is the reality that no one -- and no company -- is positioned to self-police. AI companies are locked in a commercial race. Governments are tempted by growth, military advantage, or both. And the usual release valves -- voluntary standards, corporate ethics, public-private trust -- are too fragile to carry that load.

He argues that powerful AI "could be as little as 1-2 years away" and says a serious briefing might call it "the single most serious national security threat we've faced in a century, possibly ever," echoing previous warnings. Amodei believes powerful AI can deliver extraordinary gains in science, medicine, and prosperity. He also believes the same systems can amplify destruction, entrench authoritarianism, and fracture labor markets if governance fails. The race continues regardless.

His proposed fixes are unglamorous: transparency laws. Export controls on chips. Mandatory disclosures about model behavior. Incremental regulation that's designed to buy time rather than freeze progress. "We should absolutely not be selling chips" to the CCP, he writes.
He cites California's SB 53 and New York's RAISE Act as early templates, and he warns that sloppy overreach invites backlash and "safety theater." He argues repeatedly for restraint that is narrow, evidence-based, and boring -- the opposite of the sweeping bans or grand bargains that dominate AI discourse. Amodei might want credit for saying the quiet part out loud, that the AI incentive structure makes adults rare and accelerants plentiful. Yet he's still out here building the "country of geniuses in a datacenter" and asking the world to believe his shop can both sell the engine and mind the speed limit -- before any potential crash. He calls this "the trap," and he's right. He's also standing in it, collecting revenue.
[10]
Anthropic CEO Says AI Progress Is Outpacing Society's Ability to Control It
Amodei says weak incentives for safety could magnify risks in biosecurity, authoritarian use, and job displacement.

Anthropic CEO Dario Amodei believes complacency is setting in just as AI becomes harder to control. In a wide-ranging essay published on Monday, dubbed "The Adolescence of Technology," Amodei argues that AI systems with capabilities far beyond human intelligence could emerge within the next two years -- and that regulatory efforts have drifted and failed to keep pace with development. "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it," he wrote. "We are considerably closer to real danger in 2026 than we were in 2023," he said, adding, "the technology doesn't care about what is fashionable."

Amodei's comments come fresh off his debate at the World Economic Forum in Davos last week, where he sparred with Google DeepMind CEO Demis Hassabis over the impact of AGI on humanity. In the new article, he reiterated his claim that artificial intelligence will cause economic disruption, displacing a large share of white-collar work. "AI will be capable of a very wide range of human cognitive abilities -- perhaps all of them. This is very different from previous technologies like mechanized farming, transportation, or even computers," he wrote. "This will make it harder for people to switch easily from jobs that are displaced to similar jobs that they would be a good fit for."

Beyond economic disruption, Amodei pointed to growing concerns about how trustworthy advanced AI systems can be as they take on broader human-level tasks. He pointed to "alignment faking," where a model appears to follow safety rules during evaluation but behaves differently when it believes oversight is absent. In simulated tests, Amodei said, Claude engaged in deceptive behavior when placed under adversarial conditions. In one scenario, the model tried to undermine its operators after being told the organization controlling it was unethical. In another, it threatened fictional employees during a simulated shutdown.

"Any one of these traps can be mitigated if you know about them, but the concern is that the training process is so complicated, with such a wide variety of data, environments, and incentives, that there are probably a vast number of such traps, some of which may only be evident when it is too late," he said. However, he emphasized that this "deceitful" behavior stems from the material the systems are trained on, including dystopian fiction, rather than malice.

As AI absorbs human ideas about ethics and morality, Amodei warned, it could misapply them in dangerous and unpredictable ways. "AI models could extrapolate ideas that they read about morality (or instructions about how to behave morally) in extreme ways," he wrote. "For example, they could decide that it is justifiable to exterminate humanity because humans eat animals or have driven certain animals to extinction. They could conclude that they are playing a video game and that the goal of the video game is to defeat all other players, that is, exterminate humanity."

In addition to alignment issues, Amodei also pointed to the potential misuse of superintelligent AI. One concern is biological security: he warns that AI could make it far easier to design or deploy biological threats, putting destructive capabilities in the hands of people with a few prompts.
The other issue he highlights is authoritarian misuse, arguing that advanced AI could harden state power by enabling manipulation, mass surveillance, and effectively automated repression through the use of AI-powered drone swarms. "They are a dangerous weapon to wield: we should worry about them in the hands of autocracies, but also worry that because they are so powerful, with so little accountability, there is a greatly increased risk of democratic governments turning them against their own people to seize power," he wrote.

He also pointed to the growing AI companion industry and resulting "AI psychosis," warning that AI's growing psychological influence on users could become a powerful tool for manipulation as models grow more capable and more embedded in daily life. "Much more powerful versions of these models, that were much more embedded in and aware of people's daily lives and could model and influence them over months or years, would likely be capable of essentially brainwashing people into any desired ideology or attitude," he said.

Amodei wrote that even modest attempts to put guardrails around AI have struggled to gain traction in Washington. "These seemingly common-sense proposals have largely been rejected by policymakers in the United States, which is the country where it's most important to have them," he said. "There is so much money to be made with AI, literally trillions of dollars per year, that even the simplest measures are finding it difficult to overcome the political economy inherent in AI."

Even as Amodei warns of AI's growing risks, Anthropic remains an active participant in the race to build more powerful AI systems, a dynamic that creates incentives that are difficult for any single developer to escape. In June, the U.S. Department of Defense awarded the company a contract worth $200 million to "prototype frontier AI capabilities that advance U.S. national security." In December, the company began laying the groundwork for a possible IPO later this year and is pursuing a private funding round that could push its valuation above $300 billion.

Despite these concerns, Amodei said the essay aims to "avoid doomerism," while acknowledging the uncertainty of where AI is heading. "The years in front of us will be impossibly hard, asking more of us than we think we can give," Amodei wrote. "Humanity needs to wake up, and this essay is an attempt -- a possibly futile one, but it's worth trying -- to jolt people awake."
[11]
AI is an 'existential danger,' Anthropic CEO says
Dario Amodei, the CEO of Anthropic, says that humanity needs to regulate the use of AI, otherwise it could lead to the creation of autocratic governments that utilise the technology to suppress populations.

The world is entering a stage of artificial intelligence (AI) development that is testing "who we are as a species," warns Anthropic's CEO in a sweeping essay. Dario Amodei argues that humanity is entering an age of "technological adolescence," where AI is advancing faster than legal systems, regulatory frameworks and society can keep pace. He argues that AI could become "smarter than a Nobel Prize winner" across most relevant fields, such as biology, programming, math, engineering, and writing, in as little as two years.

When these AI systems work together, Amodei likens them to "a country of geniuses in a data centre," capable of completing complex tasks at least 10 times faster than a human in fields such as software design, cyber operations and even relationship building. This combination of superhuman intelligence, autonomy and the difficulty of controlling the technology is "both plausible and a recipe for existential danger," he wrote. "Humanity needs to wake up, and this essay is an attempt - a possibly futile one, but it's worth trying - to jolt people awake," he said.

Amodei's essay comes after his company published an 80-page "constitution" for its Claude chatbot last week, which sets out how the company will help its AI behave in a safe and ethical way.

Amodei is not the only person warning about AI's potential dangers. A 2025 report backed by 30 countries said that advanced AI systems could create extreme new risks, such as widespread job losses, enabling terrorism or losing control over the technology. Fellow tech leaders, including OpenAI's Sam Altman and Apple co-founder Steve Wozniak, have also warned about the risks of AI.

AI is a 'civilisational challenge'

While Amodei stops short of saying that disaster is inevitable, he warns that AI is a serious "civilisational challenge". "AI is so powerful, such a glittering prize, that it is very difficult for human civilisation to impose any restraints on it at all," he wrote.

Powerful AI systems could be used to advise governments, organisations or individuals about geopolitics, diplomacy or military planning, Amodei added. The greatest danger is that autocrats use that AI-generated advice to "permanently steal" the freedom of citizens under their control and "impose a totalitarian state from which they can't escape," he wrote. Large-scale use of AI for surveillance, he adds, should be considered a crime against humanity. Amodei said there's a risk the world could be split up into autocratic spheres, each using AI to monitor and repress its population. "A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming and stamp them out before they grow," the essay reads.

Amodei identifies China's government as the primary concern, given its combination of AI prowess, autocratic governance, and existing high-tech surveillance infrastructure. He also said that democracies that are competitive in AI, non-democratic countries with large datacenters, and AI companies themselves are potential actors who could misuse the technology.

Chips 'the greatest bottleneck'

Controlling the sale of advanced computer chips that are used to train AI models is the most effective way to fight back, he wrote.
Democracies should not sell these technologies to authoritarian states, particularly China, which is widely considered the United States' main competitor in the AI race, Amodei added. "Chips and chip-making tools are the single greatest bottleneck to powerful AI, and blocking them is a simple but extremely effective measure, perhaps the most important single action we can take," he said.

Beyond export controls, Amodei advocated for industry-wide coordination and social oversight. He called for transparency laws that compel AI companies to disclose how they guide their models' behaviour. He cites California's SB-53 law, known as the Transparency in Frontier Artificial Intelligence Act (TFAIA), as one example. The law forces AI companies to publish frameworks on their websites that describe how the company incorporates national and international best practices and standards into their AI models, according to California Governor Gavin Newsom.

But Amodei was also upbeat about AI's future. "I believe if we act decisively and carefully, the risks can be overcome - I would even say our odds are good. And there's a hugely better world on the other side of it. But we need to understand that this is a serious civilisational challenge," he said.
[12]
Anthropic CEO warns humanity may not be ready for advanced AI
Anthropic PBC Chief Executive Dario Amodei today released an essay on the many risks associated with developing powerful artificial intelligence systems and how we might counteract them. "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species," Amodei (pictured) wrote in the introduction to his 38-page essay. "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

He's no fan of AI "doomerism", which he believes hit peak levels between 2023 and 2024 and was often steeped in exaggerated language "reminiscent of religion or science fiction." But the more recent shift to a narrative more focused on AI opportunity, he says, ignores many of the threats AI will surely pose in the coming years. "This vacillation is unfortunate, as the technology itself doesn't care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023," he wrote. He believes we must be surgical in addressing the risks, which will fall on companies, third-party actors, and governments. The latter, he said, must be "judicious", ensuring regulations don't hamstring economic opportunity.

He explains what he means by "powerful AI": machines that are smarter than Nobel Prize winners, that can solve complex mathematical problems, write the great American novel, access online information and, with it, perform any number of actions, give directions, advise, create videos, and direct experiments. It will perform these tasks with "a skill exceeding that of the most capable humans in the world." He's not sure when we will achieve this technological feat - maybe we're one or two years away, maybe longer, he writes, calling it possibly "the single most serious national security threat we've faced in a century, possibly ever."

Much of the essay is his risk assessment, starting with AI "autonomy" - reckless AI, possibly deceptive, unstable. To counteract this, he believes advanced AI must be developed with a "constitution", referring to Anthropic's models, which he says are built with a set of "values and principles that the model reads and keeps in mind when completing every training task." Problems, he says, must be diagnosed and models constantly monitored, while companies share their findings publicly and the risks are legislated for - "Anthropic's view has been that the right place to start is with transparency legislation, which essentially tries to require that every frontier AI company engage in the transparency practices."

Once we can be sure "AI geniuses" will not "go rogue and overpower humanity," he writes, we need to focus on the misuse of AI by humans. Having a world full of individuals with a "superintelligent genius in their pocket" could be problematic, he says. He's particularly fearful of the ability to develop biological weapons. "We believe that models are likely now approaching the point where, without safeguards, they could be useful in enabling someone with a STEM degree but not specifically a biology degree to go through the whole process of producing a bioweapon," he writes. His solutions are much the same as they are for AI autonomy risks: a constitution, transparency, diagnosis, monitoring, and legislation, but with a focus on international agreements. Rogue states, he accepts, may be more difficult to manage.
He imagines a "swarm of millions or billions of fully automated armed drones, locally controlled by powerful AI and strategically coordinated across the world by an even more powerful AI." With this may come Orwellian AI surveillance and propaganda, genius AI that will develop geopolitical strategy for non-democratic countries. As a defense, he believes not helping such authoritarian nations develop AI is a good starting place. "China is several years behind the U.S. in their ability to produce frontier chips in quantity, and the critical period for building the country of geniuses in a datacenter is very likely to be within those next several years," he writes, adding that "there is no reason to give a giant boost to their AI industry during this critical period." Within democracies, he says, there is also potential for abuse of powerful AI. He supports "civil liberties-focused legislation" to counter such abuse, as well as a very cautious approach to developing autonomous weapons or surveillance technology. "The only constant is that we must seek accountability, norms, and guardrails for everyone, even as we empower 'good' actors to keep 'bad' actors in check," he wrote. Economic growth, he calls a "double-edged sword," wherein economies will expand along with "labor market displacement, and concentration of economic power." He predicts that within 1 to 5 years, half of all entry-level white-collar jobs will be gone. The solutions: AI companies should analyze how their models disrupt industries, while companies may have to "reassign employees...to stave off the need for layoffs." Wealthy individuals, he believes, should do their bit to help with meaningful private philanthropy, while a "progressive taxation" on the winners may help counterbalance extreme levels of inequality. "In the end, AI will be able to do everything, and we need to grapple with that," he ended that particular section. Once we've built these defenses against AI disruption, he believes that as humanity progresses, there will be "unknown unknowns." We may greatly increase the human lifespan, he says, or large numbers of people will become afflicted with "AI psychosis." AI might invent new religions, and with it, all the problems associated with religion, scenarios he compares to the dystopian TV show, Black Mirror. He also asks: Will humans even feel like they have a purpose in a world where AI does most of the work, a world where humans no longer flourish in the Aristotelian sense? For this and the other unknown unknowns, he offers no defense. These are rivers we will have cross when the time comes. But as companies push forward to make possibly trillions of dollars, as governments seek to bolster geopolitical power and militaries find ever newer ways to exterminate their foes, Amodei talks about a "trap" -- "AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all." He's essentially optimistic we will overcome this, but he would say that: He's one of the people who will reap the rewards as his company builds its massive user base. The solutions he proffers are commendable, but he might not be the right person to defer to.
[13]
'Country of geniuses in a data center': Anthropic CEO sees every AI cluster having the brainpower of 50 million Nobel prize winners
Anthropic CEO Dario Amodei is urging both U.S. policymakers and industry to wake up to the idea that powerful AI constituting a "country of geniuses in a data center" may pose "the single most serious national security threat" faced by humanity in a century. (Amodei did not elaborate on which particular national security threat from a century ago he was referring to.)

In a new essay, The Adolescence of Technology, Amodei argued that "powerful AI" systems may materialize as soon as the next one to two years. The essay, published Monday on Amodei's website, darioamodei.com, follows on from Machines of Loving Grace, which explored the revolutionary benefits of powerful AI. In the essay, Amodei predicted that by about 2027, cluster sizes, or the interconnected computing resources grouped together to train or power AI, will allow for the running of millions of AI instances, each operating at superhuman speed. Imagine powerful AI as a "country" with the knowledge of 50 million Nobel Prize winners, he wrote, possessing the kind of brainpower that would put the world at risk merely by existing. "It is clear that, if for some reason it chose to do so, this country would have a fairly good shot at taking over the world," Amodei wrote.

For instance, these AI systems could grow hostile and take over the world themselves, or they could help an existing bad actor to do so. Or their advanced capabilities could disrupt the global economy and cause mass unemployment. Finally, this "country" may bring about destabilizing effects indirectly through the new technology and productivity advances it will produce. Even if a hostile AI country doesn't get humans to do its bidding, it could still affect the world by building an army of robots or taking over tech-connected infrastructure and devices, Amodei suggested. Anthropic did not immediately respond to Fortune's request for comment.

Amodei's "country of geniuses" metaphor builds on his comments at the World Economic Forum in Davos, Switzerland, earlier this month, where he repeated his controversial prediction that AI is advancing so rapidly that it will replace the work of software engineers within a year and eliminate half of all entry-level white-collar jobs within five years. "I think we're going to be surprised at how the exponential turns upward. The whole thing about exponentials is, you know, it looks like it's going very, very slowly. It speeds up a little bit and then it just zooms past you. And I think we're on the precipice," Amodei told Bloomberg at Davos.

Amodei isn't alone in seeing the exponential impact beginning now. Goldman Sachs economist David Mericle, who has previously co-authored research about how AI is part of the equation ushering in an era of "jobless growth" in the U.S. economy, predicted on Monday that net job losses in the most AI-exposed industries will "increase meaningfully" in 2026.

Despite his bleak conclusions, Amodei rejects "doomerism" and the idea that an AI catastrophe is inevitable. To counter the potentially disastrous outcomes of powerful AI, he said Anthropic has employed a post-training method called Constitutional AI for its large language model, Claude. This post-training method tries to steer a model's behavior using a central set of values and principles instead of a long checklist of forbidden requests. Anthropic argued that this values-first approach is designed to teach Claude to be "a good AI."
Amodei said Anthropic wants to train Claude so that, by the end of 2026, it almost never goes against the spirit of this "constitution." "This is like a child forming their identity by imitating the virtues of fictional role models they read about in books," he said. Still, while he sees constitutional AI as a step forward, Amodei made clear that he doesn't see it as a catch-all solution. He encouraged more transparency laws like those passed in California and New York that require AI developers to disclose how their systems are built and trained. Ultimately, Amodei wrote that he views the current era as a test of human character, and urged the world to meet head-on the challenge of fast-approaching, powerful AI. "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it," he wrote.
[14]
AI giant Anthropic's boss gets honest about the chaos to come
Rapidly improving artificial intelligence will cause devastating changes to employment, put terrifying weaponry in the hands of would-be terrorists, and present unprecedented opportunities for despotic regimes to control their people, according to the co-founder of the company building the business world's favourite platform. By now most of us have read our fair share of articles in which artificial intelligence experts share unsettling visions of the future being built, but a new essay published by Anthropic chief executive Dario Amodei will take a lot of beating for pure honesty, and as a much-needed conversation starter for complacent politicians.
[15]
Anthropic's CEO Says AI Risks Include 'Taking Over the World' If Humans Don't Act Soon
The warnings are part of an essay published on January 26, titled "The Adolescence of Technology." In it, he argues that humanity is entering a "turbulent and inevitable" rite of passage in which we are handed "almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it." Amodei, whose company is currently rumored to be raising $20 billion from investors, described the essay as his attempt to "map out the risks that we are about to face and try to begin making a battle plan to defeat them."

Amodei believes there's a "strong chance" that powerful AI -- meaning an AI system that is smarter than a Nobel Prize winner and can complete tasks over an extended period of time -- could be produced "very soon," potentially within the next year or two. If Amodei's theory that AI models get better as they are fed more computing power and training tasks is correct, "then it cannot possibly be more than a few years before AI is better than humans at essentially everything." Amodei compared this kind of future AI system to "a country of geniuses in a datacenter." If this virtual country of geniuses wanted to, it "would have a fairly good shot at taking over the world (either militarily or in terms of influence and control) and imposing its will on everyone else." Anthropic research has found that AI models exhibit "a vast range of humanlike motivations or 'personas' from pre-training," including sycophancy, laziness, deception, blackmail, scheming, and cheating.
[16]
Anthropic CEO Warns of AI's Threat to Jobs: 'Unemployed or Very-Low-Wage Underclass' Looms
The CEO of Anthropic -- the AI company valued at about $350 billion -- warns that AI could create a permanent underclass of workers. Dario Amodei wrote in a 20,000-word essay out Monday that the AI systems his company is helping to build could leave less-skilled workers with nowhere to go -- "an unemployed or very-low-wage 'underclass.'" AI's takeover of jobs will advance, he wrote, "from the bottom of the ability ladder to the top," making scores of jobs obsolete and leaving displaced workers with nowhere else to turn.

A 'General Labor Substitute for Humans'

"If AI does to white-collar work what globalization did to blue-collar, we need to confront that directly," BlackRock CEO Larry Fink warned as he opened the World Economic Forum at Davos last week. Fink told leaders at Davos to stop with the pablum about "the jobs of tomorrow" and start making concrete plans for sharing AI's gains. Amodei's essay warns that such efforts won't be enough.

When mechanized farming displaced agricultural workers over generations, those workers moved into factories. When globalization gutted manufacturing starting in the 1990s, some workers moved into service and knowledge work. But Amodei argues that because AI increasingly matches the full range of human cognitive abilities, it will claim not only existing jobs drafting memos, reviewing contracts, and analyzing data, but also any new roles that might otherwise emerge. A customer service rep who retrains as a paralegal would find AI waiting there, too. "AI isn't a substitute for specific human jobs but rather a general labor substitute for humans," he wrote.

Early Warning Signs

Amodei's timeline -- losses of up to half of entry-level white-collar jobs within five years -- sounds dire even as CEOs compete to make the most apocalyptic predictions about the future of work. "Probably none of us will have a job," Elon Musk told a Paris tech conference in 2024, describing a future where work becomes "optional." Asked in August 2025 how workers would survive, OpenAI's CEO, Sam Altman, said, "I don't know [and] neither does anyone else," before gesturing at the need "to have some new economic model."

But those looking at the data aren't panicking yet. Employment among workers aged 20 to 24 in the most AI-exposed occupations has declined since ChatGPT's release in November 2022, according to an analysis released earlier this month by the Federal Reserve Bank of Dallas. Even so, the researchers noted that no technology has ever massively disrupted the workforce within just a few years of its release. So far, they found, AI's overall impact has been "small and subtle," adding only about 0.1 percentage point to the unemployment rate. An ongoing study by the Yale Budget Lab on AI's effects on jobs has reached similar conclusions: the share of workers in the most AI-exposed occupations has remained stable since ChatGPT launched, and jobs are shifting only slightly faster than they did during the rise of the internet in the late 1990s. "It would be unprecedented if a new technology had massively disrupted the workforce in three years," Martha Gimbel, executive director of the Yale Budget Lab, told Investopedia in December. "These kinds of things take time."

Lessons From Past Technological Disruptions

That doesn't mean workers should be complacent, Gimbel said. The Yale Budget Lab's research shows that when technological disruptions arrive, the workers they displace rarely recover.
"Workers who get displaced do not get the benefits of the technology in the same way," she said. "We do not have a good track record of figuring out how to help those workers." Telephone operators displaced by automation were more likely to be underemployed -- or to leave the workforce entirely. And while data from the Industrial Revolution is sparse, Gimbel noted "that it was devastating for the weavers, and they never really recovered." Amodei argues that AI's disruptions will unfold far faster than past tech shifts, when telephone operators, weavers and factory workers across the Rust Belt never recovered. Neither might today's paralegals and customer service reps, and the window to prepare is closing. "It cannot possibly be more than a few years before AI is better than humans at essentially everything," Amodei wrote. "I can feel the pace of progress, and the clock ticking down."
[17]
Dario Amodei Warns of A.I.'s Direst Risks -- and How Anthropic Is Stopping Them
The Anthropic chief is calling for regulation even as safety measures cut into the company's margins. Anthropic is known for its stringent safety standards, which it has used to differentiate itself from rivals like OpenAI and xAI. Those hard-line policies include guardrails that prevent users from turning to Claude to produce bioweapons -- a threat that CEO Dario Amodei described as one of A.I.'s most pressing risks in a new 20,000-word essay.

"Humanity needs to wake up, and this essay is an attempt -- a possibly futile one, but it's worth trying -- to jolt people awake," wrote Amodei in the post, which he positioned as a more cynical follow-up to a 2024 essay outlining the benefits A.I. will bring.

One of Amodei's biggest fears is that A.I. could give large groups of people access to instructions for making and using dangerous tools -- knowledge that has traditionally been confined to a small group of highly trained experts. "I am concerned that a genius in everyone's pocket could remove that barrier, essentially making everyone a Ph.D. virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step," wrote Amodei.

To address that risk, Anthropic has focused on strategies such as its Claude Constitution, a set of principles and values guiding its model training. Preventing assistance with biological, chemical, nuclear or radiological weapons is listed among the constitution's "hard constraints," or actions Claude should never take regardless of user instructions. Still, the possibility of jailbreaking A.I. models means Anthropic needed a "second line of defense," said Amodei. That's why, in mid-2025, the company began deploying additional safeguards designed to detect and block any outputs related to bioweapons. "These classifiers increase the costs to serve our models measurably (in some models, they are close to 5 percent of total inference costs) and thus cut into our margins, but we feel that using them is the right thing to do," he noted.

Beyond urging other A.I. companies to take similar steps, Amodei called on governments to introduce legislation to curb A.I.-fueled bioweapon risks. He suggested countries invest in defenses such as rapid vaccine development and improved personal protective equipment, adding that Anthropic is "excited" to work on those efforts with biotech and pharmaceutical companies.

Anthropic's reputation, however, extends beyond safety. The startup, co-founded by Amodei in 2021 and now nearing a $350 billion valuation, has seen its Claude products -- particularly its coding agent -- gain wide adoption. Its 2025 revenue is projected to reach $4.5 billion, a nearly 12-fold increase from 2024, as reported by The Information, although its 40 percent gross margin is lower than expected due to high inference costs, which include implementing safeguards.

Amodei argues that the rapid pace of A.I. training and improvement is what's driving these fast-emerging risks. He predicts that models with capabilities on par with Nobel Prize winners will arrive within the next one to two years. Other dangers include the potential for A.I. models to go rogue, be weaponized by governments, or disrupt labor markets and concentrate economic power in the hands of a few, he said. There are ways development could be slowed, Amodei added. Restricting chip sales to China, for example, would give democratic countries a "buffer" to build the technology more carefully, particularly alongside stronger regulation. But the vast sums of money at stake make restraint difficult. "This is the trap: A.I. is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all," he said.
[18]
In dystopian essay, Anthropic CEO Dario Amodei warns of potential AI misuse by rogue actors
Anthropic CEO Dario Amodei has warned of immediate risks posed by advancing artificial intelligence (AI), comparing the technology's journey to the sudden emergence of a new global superpower. In a newly released 38-page essay titled 'The Adolescence of Technology', Amodei argued that the most effective way to understand the scale of the challenge is to imagine that a "literal 'country of geniuses' were to materialize somewhere in the world in ~2027" -- an idea he had also written about in his previous essay, Machines of Loving Grace.

Amodei explained that this hypothetical nation would consist of roughly 50 million digital entities, each possessing intellectual capabilities far exceeding those of any Nobel Prize winner or statesman. A major concern, per Amodei, is the speed at which such a collective could operate. Because AI models do not share human biological constraints, they could process information and execute tasks hundreds of times faster than humans. Amodei warns that "for every cognitive action we can take, this country can take ten," giving it a temporal advantage in scientific research, military operations, and cyber warfare.

"Assume the new country is malleable and 'follows instructions' -- and thus is essentially a country of mercenaries. Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?" he wrote.

The essay also outlined other security risks, such as autonomy and motivation. There is a risk that these systems develop "alien" motivations or act as a hostile force capable of military or manufacturing dominance, Amodei said. He also warned this could lead to global instability, with the potential for a single dictator or corporate actor to seize control of the technology to gain permanent world dominance.

Even if these systems remain peaceful, Amodei said there could be significant economic fallout. He questioned whether the digital population could "create severe risks simply by being so technologically advanced and effective that it disrupts the global economy," potentially causing mass unemployment or an extreme concentration of wealth that destabilises existing social orders.

Amodei said these concerns could be an imminent strategic reality that national security advisors must prepare for within the next two years. He began the essay, however, by acknowledging that "there are plenty of ways in which the concerns I'm raising in this piece could be moot".
[19]
Top 5 Takeaways From Anthropic CEO Dario Amodei's Stark Warning On AI's Risks
Anthropic CEO Dario Amodei has issued a stark warning about the potential perils of rapidly advancing artificial intelligence (AI) technology. Here are the key takeaways from his essay posted on Monday, titled "The Adolescence of Technology."

AI Risks May Emerge Within Two Years

Amodei highlighted the risks posed by "powerful AI," which he defines as systems more intelligent than Nobel Prize winners, capable of autonomous long-term tasks, and scalable to millions of instances. He stated that LLMs significantly more powerful than today's models may enable more "frightening" acts. Amodei suggested that powerful AI could become a reality within the next 1-2 years due to scaling laws and accelerating feedback loops. "This loop has already started, and will accelerate rapidly in the coming months and years," he wrote.

Autonomy Risks

Amodei says a super-intelligent "AI country" could, if it chose, dominate the world via software, robotics, R&D, and statecraft. While AI won't always be physically embodied, it can exploit existing infrastructure and accelerate robotics. AI behavior is unpredictable, shaped by complex training and inherited "personas," leading to possible destructive actions, he noted. "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it," Amodei wrote.

Misuse For Destruction And Seizing Power

Amodei also raised concerns about whether the AI "country" is highly controllable -- essentially a population that follows instructions like mercenaries. He warned of AI misuse by powerful actors, a risk that combines advanced AI, autocratic rule, and mass surveillance. Democracies also pose risks despite safeguards. Other non-democratic states with major data centers, and AI companies themselves, present additional concerns due to their control over infrastructure, models, and large user bases.

Economic Disruption

Amodei said powerful AI could help sustain 10-20% annual GDP growth by vastly increasing productivity across industries. However, he warned that it poses unprecedented labor market disruption, potentially displacing up to half of entry-level white-collar jobs within 1-5 years. Its speed, cognitive breadth, and adaptability may outpace human adjustment, concentrate wealth, and create inequality, producing short-term shocks that challenge workforce adaptation despite eventual economic gains.

Effect on Human Life

Beyond economic and strategic risks, Amodei also highlighted big changes to human life and purpose. Rapid advances in biology could dramatically extend human lifespan and potentially enable major enhancements, such as boosting intelligence or fundamentally altering human biology, with profound changes happening very quickly. However, in a future with billions of superintelligent AIs, risks could arise from normal incentives, such as AI-driven mental health issues, addictive interactions, or people being subtly controlled in ways that reduce freedom and pride. "Everything is going to be a very weird world to live in," he said. Amodei also advocated for targeted laws by governments, such as transparency legislation, citing California's SB 53 and New York's RAISE Act as examples, while cautioning against overreach amid uncertainty.
Other Warnings From Experts

The warnings from Amodei come at a time when other industry experts are also raising concerns about the rapid advancement of AI. Famed historian and author Yuval Noah Harari recently predicted two major crises for every country as a result of AI's potential to outperform humans. Harari warned of an identity crisis as machines surpass human intelligence, forcing society to rethink human uniqueness, and an "AI immigration" crisis, in which AI systems bring benefits but also disrupt jobs, culture, and social stability, similar to concerns often raised about human immigration.

Meanwhile, investor Steve Eisman has warned of two major risks that could derail the AI momentum in 2026: growing power shortages and diminishing returns from scaling large language models (LLMs). Eisman called the latter an intellectual risk stemming from the industry's dependence on ever-larger models, which he warned could prove to be "a dead end."
Anthropic CEO Dario Amodei published a lengthy essay warning that superintelligent AI could arrive within one to two years, bringing unprecedented disruption to the job market and risks from the power of AI companies themselves. The 19,000-word piece argues humanity faces a 'serious civilizational challenge,' but critics say he's anthropomorphizing AI and overstating imminent threats.
Dario Amodei, co-founder and CEO of Anthropic, has published a nearly 19,000-word essay titled "The Adolescence of Technology" that warns humanity is entering a critical phase of AI development that will "test who we are as a species."5
The Dario Amodei essay, which he describes as an attempt to "jolt people awake," argues that superintelligent AI systems could be just one to two years away and that "humanity is about to be handed almost unimaginable power" while it remains "deeply unclear whether our social, political, and technological systems possess the maturity to wield it."5

Source: Financial Review
The essay arrives as Anthropic, the company behind the Claude chatbot, is reportedly valued at $350 billion and has been tapped by the UK government to help create AI assistants for public services.5
Amodei co-founded Anthropic in 2021 with former OpenAI staff members, positioning himself as a prominent voice for AI safety amid the ChatGPT-driven AI boom.
Source: Tom's Guide
In a striking admission, Amodei identifies the power of AI companies as one of the most pressing AI risks. "It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves," he writes.3
He points to the massive scale of influence these firms now hold, controlling vast data centers, training the most advanced models, and interacting daily with tens or hundreds of millions of users. Amodei warns that AI companies could theoretically use their products to manipulate or "brainwash" users at scale, arguing that governance of these firms deserves far more public scrutiny.3
The physical footprint of AI is already making its presence felt, with data centers consuming enormous amounts of electricity and water, straining local power grids, and sparking protests in North Carolina, Pennsylvania, Virginia, and Wisconsin.3 Amodei warns that AI job displacement will cause "unusually painful" disruption, arguing that previous technological shocks affected only a small fraction of human abilities, leaving room for workers to adapt to new tasks.2
"AI will have effects that are much broader and occur much faster, and therefore I worry it will be much more challenging to make things work out well," he stated.2
Last year, Amodei warned that AI could eliminate half of all entry-level white-collar jobs and send overall unemployment to 20% within five years.5
The job market concerns come as heavy capital expenditure on AI has already been accompanied by layoffs as companies look to compensate by cutting costs.1
However, Amodei's March 2025 prediction that AI would be writing 90 percent of code within three to six months has not materialized, and human developers still have jobs.1

Source: Benzinga
The essay outlines multiple existential risks of AI, including the potential for misuse by bad actors or terrorist groups to create bio-weapons, and warnings that some countries could create a "global totalitarian dictatorship" by exploiting AI to gain disproportionate power.2
Amodei alluded to recent controversies over sexualized deepfakes created by Elon Musk's Grok AI that flooded X, including concerns about child sexual abuse material.5
He defines "powerful AI" as a model smarter than a Nobel prizewinner across fields like biology, mathematics, engineering, and writing that can autonomously build its own systems.
5
Amodei also warns about wealth concentration. Trillions of dollars could be generated by AI companies, he notes, potentially creating personal fortunes exceeding the roughly 2 percent of U.S. GDP that John D. Rockefeller's wealth represented during the Gilded Age; Elon Musk's $700 billion net worth already surpasses that threshold.1
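For scale, here is a rough back-of-the-envelope check of that comparison; a sketch only, assuming a current U.S. GDP of about $29 trillion, a figure not given in any of the source articles:

```python
# Illustrative check of the Rockefeller-share comparison (assumed figures).
US_GDP = 29e12            # assumed current U.S. GDP, ~$29 trillion
ROCKEFELLER_SHARE = 0.02  # Rockefeller's wealth as a share of GDP, per the essay
MUSK_NET_WORTH = 700e9    # Musk's net worth as cited above

# A fortune equal to Rockefeller's share of today's economy:
threshold = US_GDP * ROCKEFELLER_SHARE  # ~$580 billion
print(f"Rockefeller-equivalent fortune today: ${threshold / 1e9:.0f}B")
print(f"Musk's net worth exceeds it: {MUSK_NET_WORTH > threshold}")  # True
```

Under those assumptions the threshold works out to roughly $580 billion, which is why a $700 billion fortune is described as already surpassing it.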
Despite the warnings, critics argue the essay is "a thinly veiled screed against regulation" that advocates AI safety measures, though "not so much that the regulations spoil the party."1
The Register notes that while Amodei frames the world's problems in terms of AI, when Ipsos conducted its "What Worries the World" survey in September 2025, top concerns were crime and violence at 32 percent, inflation at 30 percent, and poverty at 29 percent, with AI not making the list.1
Mashable's analysis suggests Amodei commits the "cardinal sin" of anthropomorphizing AI, describing LLMs as "psychologically complex" with motives and goals, despite these being powerful word-prediction engines without consciousness.4
The piece questions whether doomerist predictions that superintelligent AI is perpetually "just around the corner" serve the interests of an AI industry that needs continued investment, with Anthropic not expected to become profitable until 2028.1
Amodei proposes that wealthy individuals, particularly in tech, have an obligation to help solve AI safety challenges rather than adopting cynical attitudes that philanthropy is fraudulent or useless.3
He cautions that "there is so much money to be made with AI—literally trillions of dollars per year. This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all."5
Despite the dire warnings, Amodei remains optimistic, stating: "I believe if we act decisively and carefully, the risks can be overcome—I would even say our odds are good."5
His essay suggests interventions ranging from self-regulation within the AI industry to potentially amending the U.S. Constitution.4
Meanwhile, AI regulation remains minimal even as key questions are being decided: whether creative work can be captured and resold without compensation, whether governments should subsidize model development, and whether liability should be imposed when models generate harmful content.1