Pro-Human AI Declaration unites unlikely allies on responsible AI development framework


The Future of Life Institute released the Pro-Human AI Declaration, a bipartisan framework signed by hundreds including Steve Bannon and Susan Rice. The document establishes five pillars for human-centered AI governance, prohibits superintelligence development without consensus, and mandates pre-deployment testing—addressing the urgent need for AI regulation exposed by recent Pentagon-Anthropic tensions.

Bipartisan Consensus Emerges on AI Development

The Future of Life Institute has released the Pro-Human AI Declaration, a framework for responsible AI development that achieves something Washington has failed to produce: bipartisan consensus on artificial intelligence regulation [1]. The document, signed by hundreds of experts and public figures including former Trump advisor Steve Bannon and Obama's National Security Advisor Susan Rice, establishes guidelines to keep AI development focused on human values rather than corporate interests [2][3].

Source: TechCrunch

Max Tegmark, the MIT physicist who organized the effort, notes that polling now shows 95% of Americans oppose an unregulated race to superintelligence [1]. The declaration's release follows the Pentagon-Anthropic standoff, in which Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk" after the company refused unlimited military access to its technology, exposing how costly Congressional inaction has become [1].

Five Pillars for Human-Centric AI

The Pro-Human AI Declaration opens with a stark observation: humanity faces a fork in the road between "a race to replace" humans as workers and decision-makers, and a path where trustworthy tech and controllable AI tools amplify human potential [3]. The framework rests on five pillars: keeping humans in charge, preventing concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable [1].

Among its provisions, the declaration prohibits superintelligence development until scientific consensus confirms it can be done safely and with democratic buy-in. It mandates off-switches on powerful systems and bans architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown [1]. The document also addresses AI monopolies and calls for "Democratic Authority Over Major Transitions" and "Shared Prosperity" [3].

Secret Meeting Produces Unlikely Coalition

In January, approximately 90 political, community, and thought leaders gathered at a New Orleans Marriott under the Chatham House Rule for a secret conference on artificial intelligence [2]. Church leaders sat beside labor union representatives, while progressive power brokers who drafted Bernie Sanders found themselves alongside MAGA talking heads. No one knew who else had been invited until they entered the room [2].

Source: The Verge

The bipartisan framework attracted support from major unions including the AFL-CIO, the American Federation of Teachers, and the Screen Writers Guild, alongside religious organizations and advocacy groups [2]. Individual signatories include Ralph Nader, Signal Foundation president Meredith Whittaker, Glenn Beck, Richard Branson, former Joint Chiefs Chairman Mike Mullen, and Nobel Prize-winning economist Daron Acemoglu [3]. The least popular position still received approval from 94% of attendees [2].

Child Safety as Pressure Point

Tegmark sees child safety as the issue most likely to break the current regulatory impasse. The declaration calls for mandatory pre-deployment testing of AI products, particularly chatbots and companion apps aimed at younger users, covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation [1].

"If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that," Tegmark explained. "We already have laws. It's illegal. So why is it different if a machine does it?" [1]

He believes once pre-deployment testing is established for children's products, the scope will expand to include testing for bioweapon assistance and threats to government stability.

Industry Voices Notably Absent

Unlike the 2017 Asilomar AI Principles, which drew signatures from Sam Altman, Elon Musk, Demis Hassabis, and representatives from Google, Intel, and Apple, no one from the AI industry was invited to this effort [2]. Emilia Javorsky, director of the Futures Program at the Future of Life Institute, called this "a very deliberate design choice," noting that corporate interests typically dominate such conversations [2].

Joe Allen of Humans First, a former correspondent for Bannon's War Room, told NBC News the declaration represents "painstaking consensus among intellectuals and activists who have been thinking about the dangers and downsides of artificial intelligence for many years" [3]. The bipartisan coalition demonstrates that when it comes to protecting human agency against increasingly capable AI systems, political divisions take a back seat to shared humanity.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited