2 Sources
[1]
The AI political resistance has arrived
In early January, a group of 90 or so political, community and thought leaders gathered in a New Orleans Marriott for a secret conference on artificial intelligence -- so secret, in fact, that no one knew who else had been invited until they walked into the room. Church leaders and conservative academics were sitting next to labor union representatives. Progressive power brokers who'd drafted Bernie Sanders to run for president suddenly found themselves breathing the same air as MAGA talking heads. And the AI thought leaders who'd invited them to New Orleans were hoping that none of them would kill each other. On Wednesday, the Future of Life Institute, one of the most authoritative voices in the world of AI safety, released the results of that meeting: the Pro-Human AI Declaration, a concise document with five guidelines on how AI development must be centered on humanity first, with a pointed focus on avoiding the concentration of power in the hands of the powerful; preserving the well-being of children, families and communities; and preserving human agency and liberty. It has the broadest range of signatories that I personally have ever seen on a single political document. Powerful civic organizations well outside the tech world have signed onto the Declaration: major unions like the AFL-CIO, the American Federation of Teachers, and the Screen Writers Guild; religious organizations like the G20 Interfaith Forum Association and the Congress of Christian Leaders; the Progressive Democrats of America, the group that drafted Bernie Sanders to run as a Democrat in 2016; think tanks like the conservative Institute for Family Studies and advocacy groups like Parents RISE!. 
The individual signatories range even further: Democratic presidential candidate Ralph Nader, AFT president Randi Weingarten, Signal Foundation president Meredith Whittaker, The Blaze's Glenn Beck, War Room's Steve Bannon, Virgin Group founder Sir Richard Branson, former National Security Advisor Susan Rice, SAG-AFTRA members, leaders of major evangelical organizations. More are expected to sign on in the next several days. The meeting was under Chatham House Rules and the list of attendees remains private. But the participants who agreed to speak to The Verge about the experience said that they'd been invited by Max Tegmark, the co-founder of FLI and an MIT professor who had been named to the TIME 100 AI list. "We spent a lot of time talking to him over the course of the last few months," Weingarten, a powerful teachers' union advocate, told The Verge in a phone interview. Though she was unable to make it to New Orleans, she was involved in drafting the document, and she'd found remarkable similarities in FLI's worldview and AFT's own "common sense guardrails" for using AI in schools. "We've been on parallel tracks for quite a while without knowing it." Joe Allen, the cofounder of Humans First and a former correspondent for Bannon's show War Room, told The Verge that Tegmark had also invited him to New Orleans, as well as an earlier proof-of-concept meeting in Manhattan. Though the wide range of attendees was jarring and the political tensions weren't completely gone, Allen was surprised by how quickly they all agreed on similar topics: autonomous lethal weapons should not be solely AI-powered. AI companies should not leverage children's emotional attachment for profit. AI should not be granted legal personhood. (The least popular position in the Declaration still got approved by 94% of attendees.) 
"I think about it like, if there's knowledge that there's poison in the water supply, or that drugs are flooding schools -- anything like that, in general -- most people are going to be against it and it isn't partisan," he said. AI was slightly trickier in that people's general opinion about specific AI models divided along party lines -- Grok was the "based" AI and Anthropic was the "woke" AI -- but to Allen, the distinction was meaningless. "Like, what does 'based' and 'woke' even mean at this point?" Nearly a decade ago, FLI had laid out a more optimistic set of principles for AI research -- 23 principles, to be exact, written during the 2017 Asilomar Conference on Beneficial AI, which drew over 100 tech luminaries of the day. Signatories and endorsers of the Asilomar AI Principles included AI leaders like Sam Altman, Elon Musk, and Demis Hassabis; luminaries like Stephen Hawking and Ray Kurzweil; and representatives from major companies like Google, Intel and Apple. But this time, no one from the industry was invited, to say nothing of people on the level of Altman and Musk. "That was actually a very deliberate design choice," Emilia Javorsky, the director of the Futures Program at FLI, told The Verge. Whenever she'd attended conferences and events about AI's impact across society, she noticed that corporate interests would eventually become the dominant perspective in the room, "just by nature of their size and weight and funding capabilities." Instead, the invitees were from civil society organizations, all of whom were experiencing mass disruption due to artificial intelligence, and all of whom were fed up with Big Tech shrugging off their concerns.
Anthony Aguirre, another co-founder of FLI and a prominent cosmology professor at UC Santa Cruz, emphasized that this declaration was not their attempt to redo the Asilomar Principles, but a somber acknowledgement of a dark new reality -- one where their former colleagues were now the heads of major corporations, trying to achieve artificial general intelligence before their rivals did and satisfy shareholders before addressing safety. The power to steer AI's development was increasingly concentrated in the hands of the few, and the Trump administration's aggressive deregulation had further empowered them. "Other than the overall mass of humanity, there was one entity that would have put meaningful control on what they could do, and that was the US government," he told The Verge. "Now that it's backing them and wants to keep them unrestrained, the only thing that's a real threat are other companies." In the absence of Big Tech and public scrutiny, said Javorsky, there was something unique about how quickly this group coalesced around the same issues and came to the same conclusions. Over the course of the next few days, Javorsky kept hearing the same refrain: "'We will not have the luxury of debating all of those other issues if we don't get this thing right. So let's get this thing right.'" In Weingarten's view, the Declaration served as the mission statement of what she called a "key demanding coalition" -- a strategic alliance of political opponents -- and a way to keep all their efforts coordinated against a government that elevated enterprise over society. "What is really important is that there are other people who have said, let's try to create a bigger coalition to say that we need humanity to be at the center of AI," she noted. On its own, AFT could have perhaps pushed the issue of child safety, but there was only so much pressure they could exert on lawmakers. 
But if they joined forces with several other trade unions, plus religious organizations, plus some allies on the other side of the aisle? Now those lawmakers would be nervous. "If the government won't do it, then the people have to force the government to do it. And you start with a statement of principles." "If there's one statement I would make about the whole thing, which is what I said to the group when I had their attention, is that no one is going to engineer a pro-human movement. The only thing you can do is inspire it," said Allen. "I do think that statements like this should inspire a pro-human movement. Like a fundamental document that's setting the tone...There's no amount of social engineering, or money, or media, or any of that, that's really gonna do it." Exactly what that looks like, however, remains unclear -- or at least, not easily translated into elections. FLI is running an ad campaign called "Protect What's Human," but as a 501(c)(3), it cannot endorse or campaign for candidates or ballot initiatives during the midterms. It did, however, conduct a poll with Tavern Research in February, testing the popularity of the Declaration's principles among voters. Though respondents were split neatly down partisan lines in whom they voted for and which party they belonged to, they supported the statements in the Declaration by wide margins. The worst-performing principle -- AI must not create monopolies or concentrate control in a few hands -- still garnered 69% support from respondents. The best-performing principle -- humans must stay in charge of AI and prevent it from harming children, families and communities -- won 80% support. To Javorsky, the poll results validated the conference's points. "It's one thing to have a whole bunch of civil society actors in a room together and think something's representative. But you have to actually validate those with real people. This is actually resonating with them."
When we spoke on Thursday, Anthropic, which had recently floated the possibility that its AI had gained consciousness, was in the middle of a fight with the Pentagon over whether the military could use its AI for autonomous lethal weapons without human oversight. By Friday evening, OpenAI threw Anthropic under the bus to score its own Pentagon contract. Days after that fight resolved -- after the United States used Anthropic-powered tools to assassinate the Ayatollah of Iran, after several more reports of looming AI layoffs emerged, and after the scale of the Pentagon's mass-surveillance requests became more evident -- Alan Minsky, the CEO of the Progressive Democrats of America and a meeting attendee, told The Verge that he could not foresee any political opposition to the declaration, from either the left or the right. "Altman and Musk, certainly, have taken a flippant manner towards what are serious threats to communities: the psychological deterioration of a population that lives increasingly online, the impact of continual economic maldistribution of wealth, and, of course, contempt for the idea that basic protection must come before profits," he said. "The risk of an existential threat to humanity is no longer something they even blink at. As the public realizes that this is their attitude, that they have utter contempt for the average person's welfare -- yes, we think the public will be on our side."
[2]
Pro-human AI declaration brings together unlikely group calling for trustworthy tech
An unlikely band of prominent business, religious, government and academic leaders has set aside political differences and signed onto a new declaration of human rights for the AI age. The Pro-Human AI Declaration, released Wednesday and backed by more than 40 organizations, asserts the importance of humans and human values as AI becomes increasingly powerful and, in some regards, humanlike. Signatories include former Trump administration adviser Steve Bannon, conservative firebrand Glenn Beck and billionaire mogul Richard Branson, as well as consumer advocate Ralph Nader, Biden administration national security adviser Susan Rice and Nobel Prize-winning economist Daron Acemoglu. "As companies race to develop and deploy AI systems, humanity faces a fork in the road," the statement's preamble declares. "One path is a race to replace: humans replaced as creators, counselors, caregivers and companions, then in most jobs and decision-making roles, concentrating ever more power in unaccountable institutions and their machines." "There is a better path," the statement continues, "where trustworthy and controllable AI tools amplify rather than diminish human potential, empower people, enhance human dignity, protect individual liberty, strengthen families and communities, preserve self-governance and help create unprecedented health and prosperity." The declaration was drafted by a coalition of organizations from across the political spectrum, including the Congress of Christian Leaders, the American Federation of Teachers and the Progressive Democrats of America. The Future of Life Institute, a nonprofit advocacy group whose mission is to guide advanced technology toward beneficial purposes and avert large-scale risks to humanity, convened the participants and facilitated the drafting process. The declaration took shape over multiple in-person gatherings and was finalized after a wider ratification meeting in New Orleans in January.
The declaration, also signed by AI pioneer Yoshua Bengio, covers five main topics with titles such as "Keeping Humans in Charge" and "Responsibility and Accountability for AI Companies." Within each topic area, a list of finer-grained statements details the signers' pro-human ideology. "No AI Monopolies," "Democratic Authority Over Major Transitions" and "Shared Prosperity" are several of the statements that make up the second major topic area, entitled "Avoiding Concentration of Power." Joe Allen, senior fellow at Humans First, a nonpartisan social advocacy organization campaigning to raise awareness about the future of AI, and the former technology editor at Steve Bannon's War Room podcast, told NBC News the declaration was "the product of painstaking consensus among intellectuals and activists who have been thinking about the dangers and downsides of artificial intelligence for many years." According to Allen, the signers spanned a wide "axis, with reasonable techno-optimists at the top and a few of us quasi-Luddites below." "As with free speech, and freedom in general, the ideal position is that every human being -- even one's ideological opponents -- has some say over a fundamentally anti-human technology," Allen shared in written comments. AI systems have become dramatically more capable over the past few years and even months, with models reshaping or eliminating software development jobs and outpacing scientists' ability to create new tests to measure their performance in areas like mathematics. "Big Tech is racing to create AI smarter than humans," said Brendan Steinhauser, a former Republican campaign strategist and director of the Alliance for Secure AI, a Washington, D.C.-based advocacy organization. "The Alliance for Secure AI remains steadfast in its mission to keep humanity in control of AI, not the other way around."
"If we want AI to benefit humanity and not just Silicon Valley CEOs," Steinhauser told NBC News, "then we must come together to protect our future."
In an unprecedented alliance, political adversaries from Steve Bannon to Ralph Nader have signed the Pro-Human AI Declaration, demanding that artificial intelligence development prioritize human values over corporate interests. The Future of Life Institute orchestrated the coalition of 40+ organizations, marking a significant shift in AI governance debates.
A secret gathering in a New Orleans Marriott this January brought together approximately 90 political, community, and thought leaders for an extraordinary purpose: finding common ground on artificial intelligence. The participants represented a spectrum rarely seen in one room -- progressive power brokers who drafted Bernie Sanders sat alongside MAGA talking heads, while church leaders shared space with labor union representatives [1]. The Future of Life Institute, co-founded by MIT professor Max Tegmark, orchestrated this unlikely coalition with a deliberate design choice: no one from the AI industry was invited [1].

The result, released Wednesday, is the Pro-Human AI Declaration, a concise document with five guidelines demanding that AI development center on humanity first, with pointed focus on preventing concentration of power, preserving the well-being of children and families, and protecting human agency [1]. More than 40 organizations have backed the declaration, representing what may be the broadest bipartisan support for AI policy ever assembled [2]. The signatories include Steve Bannon, Glenn Beck, and Richard Branson alongside Ralph Nader, Susan Rice, and Nobel Prize-winning economist Daron Acemoglu [2].
Source: NBC
Major unions like the AFL-CIO, the American Federation of Teachers, and the Screen Writers Guild joined forces with religious organizations including the G20 Interfaith Forum Association and the Congress of Christian Leaders [1]. Signal Foundation president Meredith Whittaker and AI pioneer Yoshua Bengio also signed on, lending technical credibility to the effort [1][2].

Joe Allen, cofounder of Humans First and former correspondent for Bannon's War Room, described the declaration as "the product of painstaking consensus among intellectuals and activists who have been thinking about the dangers and downsides of artificial intelligence for many years" [2]. Despite jarring political tensions, participants quickly agreed on core issues: autonomous lethal weapons should not be solely AI-powered, AI companies should not leverage children's emotional attachment for profit, and AI should not be granted legal personhood. The least popular position still received approval from 94% of attendees [1].

The declaration's preamble frames a stark choice: "As companies race to develop and deploy AI systems, humanity faces a fork in the road." One path sees humans replaced as creators, counselors, and caregivers while power concentrates in unaccountable institutions [2]. The alternative path envisions trustworthy and controllable AI tools that amplify human potential, enhance human dignity, and strengthen communities [2].

The five main topics include "Keeping Humans in Charge" and "Responsibility and Accountability for AI Companies," with detailed statements addressing concerns like "No AI Monopolies," "Democratic Authority Over Major Transitions," and "Shared Prosperity" [2]. This approach marks a deliberate departure from the corporate influence that has dominated AI policy discussions.
Nearly a decade ago, the Future of Life Institute released the Asilomar AI Principles, 23 guidelines drafted at the 2017 conference that drew over 100 tech luminaries including Sam Altman, Elon Musk, and Demis Hassabis, with endorsements from representatives at Google, Intel, and Apple [1]. This time, the exclusion of Big Tech reflects growing concerns about corporate interests dominating conversations about AI's societal impact. Emilia Javorsky, director of the Futures Program at FLI, explained that excluding industry representatives was "a very deliberate design choice," noting that corporate interests inevitably become the dominant perspective "just by nature of their size and weight and funding capabilities" [1]. This strategic shift acknowledges that civil society organizations, unions, and advocacy groups need space to articulate human values without corporate pressure.

As AI systems become dramatically more capable, reshaping software development jobs and outperforming scientists in areas like mathematics, the declaration arrives at a critical moment [2]. Brendan Steinhauser, director of the Alliance for Secure AI, emphasized the urgency: "Big Tech is racing to create AI smarter than humans. If we want AI to benefit humanity and not just Silicon Valley CEOs, then we must come together to protect our future" [2].

Randi Weingarten, president of the American Federation of Teachers, discovered that her organization's "common sense guardrails" for using AI in schools aligned remarkably with FLI's worldview. "We've been on parallel tracks for quite a while without knowing it," she noted [1].
Source: The Verge
This convergence suggests that concerns about human-centered AI governance transcend partisan divisions, creating potential for meaningful policy action that prioritizes human dignity over technological acceleration.