Dario Amodei warns AI risks are 'almost here' in 19,000-word essay calling out AI companies

Reviewed by Nidhi Govil

Anthropic CEO Dario Amodei published a lengthy essay warning that superintelligent AI could arrive within one to two years, bringing unprecedented disruption to the job market and risks from the power of AI companies themselves. The 19,000-word piece argues that humanity faces a 'serious civilizational challenge,' but critics say he is anthropomorphizing AI and overstating imminent threats.

Anthropic CEO Issues Urgent Warning on AI Risks

Dario Amodei, co-founder and CEO of Anthropic, has published a nearly 19,000-word essay titled "The Adolescence of Technology" that warns humanity is entering a critical phase of AI development that will "test who we are as a species."

The essay, which Amodei describes as an attempt to "jolt people awake," argues that superintelligent AI systems could be just one to two years away and that "humanity is about to be handed almost unimaginable power" while it remains "deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

Source: Financial Review

The essay arrives as Anthropic, the company behind the Claude chatbot, is reportedly valued at $350 billion and has been tapped by the UK government to help create AI assistants for public services.

Amodei co-founded Anthropic in 2021 with former OpenAI staff members, positioning himself as a prominent voice for AI safety amid the ChatGPT-driven AI boom.

Source: Tom's Guide

AI Companies Themselves Pose Major Threat

In a striking admission, Amodei identifies the power of AI companies as one of the most pressing AI risks. "It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves," he writes.

He points to the massive scale of influence these firms now hold, controlling vast data centers, training the most advanced models, and interacting daily with tens or hundreds of millions of users.

Amodei warns that AI companies could theoretically use their products to manipulate or "brainwash" users at scale, arguing that AI governance of these firms deserves far more public scrutiny.

The physical footprint of AI is already making its presence felt, with data centers consuming enormous amounts of electricity and water, straining local power grids, and sparking protests in North Carolina, Pennsylvania, Virginia, and Wisconsin.[3](https://www.tomsguide.com/ai/anthropics-ceo-just-warned-everyone-that-the-next-big-ai-risk-to-humanity-is-actually-ai-companies-themselves)

Unprecedented Disruption to the Job Market

Amodei warns that AI job displacement will cause "unusually painful" disruption, arguing that previous technological shocks affected only a small fraction of human abilities, leaving room for workers to adapt to new tasks.

"AI will have effects that are much broader and occur much faster, and therefore I worry it will be much more challenging to make things work out well," he stated.

Last year, Amodei warned that AI could eliminate half of all entry-level white-collar jobs and push overall unemployment as high as 20% within five years.

The job market concerns come as heavy capital expenditure on AI has already been accompanied by layoffs, with companies looking to offset the spending by cutting costs.

However, Amodei's March 2025 prediction that AI would be writing 90 percent of code within three to six months has not materialized, and human developers still have jobs.

Source: Benzinga

Existential Risks of AI and Misuse by Bad Actors

The essay outlines multiple existential risks of AI, including the potential for misuse by bad actors or terrorist groups to create bio-weapons, and warnings that some countries could create a "global totalitarian dictatorship" by exploiting AI to gain disproportionate power.

Amodei alluded to recent controversies over sexualized deepfakes created by Elon Musk's Grok AI that flooded X, including concerns about child sexual abuse material.

He defines "powerful AI" as a model that is smarter than a Nobel prizewinner across fields such as biology, mathematics, engineering, and writing, and that can autonomously build its own systems.

Amodei also warns about wealth concentration, noting that AI companies could generate trillions of dollars and create personal fortunes exceeding the roughly 2 percent of GDP that John D. Rockefeller's wealth represented during the Gilded Age; Elon Musk's $700 billion net worth has already surpassed that threshold.

Critics Question AI Regulation Stance and Anthropomorphizing

Despite the warnings, critics argue the essay is "a thinly veiled screed against regulation" that advocates for AI safety measures, but "not so much that the regulations spoil the party."

The Register notes that while Amodei frames the world's problems in terms of AI, when Ipsos conducted its "What Worries the World" survey in September 2025, top concerns were crime and violence at 32 percent, inflation at 30 percent, and poverty at 29 percent, with AI not making the list.

Mashable's analysis suggests Amodei commits the "cardinal sin" of anthropomorphizing AI, describing LLMs as "psychologically complex" with motives and goals, despite these being powerful word-prediction engines without consciousness.

The piece questions whether doomerish predictions that superintelligent AI is perpetually "just around the corner" serve the interests of an AI industry that needs continued investment, noting that Anthropic is not expected to become profitable until 2028.

Solutions Require Wealthy Tech Leaders to Act

Amodei proposes that wealthy individuals, particularly in tech, have an obligation to help solve AI safety challenges rather than adopting cynical attitudes that philanthropy is fraudulent or useless.

He cautions that "there is so much money to be made with AI—literally trillions of dollars per year. This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all."

Despite the dire warnings, Amodei remains optimistic, stating: "I believe if we act decisively and carefully, the risks can be overcome—I would even say our odds are good."

His essay suggests interventions ranging from self-regulation within the AI industry to potentially amending the U.S. Constitution.

Meanwhile, AI regulation remains minimal even as questions pile up over whether creative work can be captured and resold without compensation, whether governments should subsidize model development, and whether liability should be imposed when models generate harmful content.
