Pro-Human AI Declaration brings together political rivals to demand human-centered AI governance

In an unprecedented alliance, political adversaries from Steve Bannon to Ralph Nader have signed the Pro-Human AI Declaration, demanding that artificial intelligence development prioritize human values over corporate interests. The Future of Life Institute orchestrated the coalition of 40+ organizations, marking a significant shift in AI governance debates.

Political Rivals Unite Behind AI Governance Framework

A secret gathering at a New Orleans Marriott this January brought together approximately 90 political, community, and thought leaders for an extraordinary purpose: finding common ground on artificial intelligence. The participants represented a spectrum rarely seen in one room—progressive power brokers who drafted Bernie Sanders sat alongside MAGA talking heads, while church leaders shared space with labor union representatives [1]. The Future of Life Institute, led by MIT professor Max Tegmark, orchestrated this unlikely coalition with a deliberate design choice: no one from the AI industry was invited [1].

The result, released Wednesday, is the Pro-Human AI Declaration—a concise document with five guidelines demanding that AI development center on humanity first, with pointed focus on preventing concentration of power, preserving the well-being of children and families, and protecting human agency [1]. More than 40 organizations have backed the declaration, representing what may be the broadest bipartisan support for AI policy ever assembled [2].

Diverse Coalition Signals Shift in AI Governance Debates

The signatories include Steve Bannon, Glenn Beck, and Richard Branson alongside Ralph Nader, Susan Rice, and Nobel Prize-winning economist Daron Acemoglu [2].

Source: NBC

Major unions like the AFL-CIO, the American Federation of Teachers, and the Screen Writers Guild joined forces with religious organizations including the G20 Interfaith Forum Association and the Congress of Christian Leaders [1]. Signal Foundation president Meredith Whittaker and AI pioneer Yoshua Bengio also signed on, lending technical credibility to the effort [1][2].

Joe Allen, cofounder of Humans First and former correspondent for Bannon's War Room, described the declaration as "the product of painstaking consensus among intellectuals and activists who have been thinking about the dangers and downsides of artificial intelligence for many years" [2]. Despite sharp political tensions, participants quickly agreed on core issues: lethal weapons should not be controlled solely by AI, AI companies should not exploit children's emotional attachment for profit, and AI should not be granted legal personhood. Even the least popular position received approval from 94% of attendees [1].

Demanding Trustworthy and Controllable AI Over Corporate Interests

The declaration's preamble frames a stark choice: "As companies race to develop and deploy AI systems, humanity faces a fork in the road." One path sees AI replace humans as creators, counselors, and caregivers while concentrating power in unaccountable institutions [2]. The alternative envisions trustworthy and controllable AI tools that amplify human potential, enhance human dignity, and strengthen communities [2].

The five main topics include "Keeping Humans in Charge" and "Responsibility and Accountability for AI Companies," with detailed statements addressing concerns like "No AI Monopolies," "Democratic Authority Over Major Transitions," and "Shared Prosperity" [2]. This approach marks a deliberate departure from the corporate influence that has dominated AI policy discussions.

From Asilomar to New Orleans: A Changing Landscape

Nearly a decade ago, the Future of Life Institute released the Asilomar AI Principles—23 guidelines drafted at the 2017 conference that drew over 100 tech luminaries including Sam Altman, Elon Musk, and Demis Hassabis, with endorsements from representatives at Google, Intel, and Apple [1]. This time, the exclusion of Big Tech reflects growing concerns about corporate interests dominating conversations about AI's societal impact.

Emilia Javorsky, director of the Futures Program at FLI, explained that excluding industry representatives was "a very deliberate design choice," noting that corporate interests inevitably become the dominant perspective "just by nature of their size and weight and funding cap" [1]. This strategic shift acknowledges that civil society organizations, unions, and advocacy groups need space to articulate human values without corporate pressure.

What This Means for AI's Future Across the Political Spectrum

As AI systems become dramatically more capable—reshaping software development jobs and outperforming scientists in areas like mathematics—the declaration arrives at a critical moment [2]. Brendan Steinhauser, director of the Alliance for Secure AI, emphasized the urgency: "Big Tech is racing to create AI smarter than humans. If we want AI to benefit humanity and not just Silicon Valley CEOs, then we must come together to protect our future" [2].

Randi Weingarten, president of the American Federation of Teachers, discovered that her organization's "common sense guardrails" for using AI in schools aligned remarkably with FLI's worldview. "We've been on parallel tracks for quite a while without knowing it," she noted [1].

Source: The Verge

This convergence suggests that concerns about human-centered AI governance transcend partisan divisions, creating potential for meaningful policy action that prioritizes human dignity over technological acceleration.
