2 Sources
[1]
Fears About A.I. Prompt Talks of Super PACs to Rein In the Industry
Theodore Schleifer writes about the intersection of money, technology and politics.

As artificial intelligence companies gain political power and prepare to pour cash into the midterm elections, some of those worried about the dangers of an unfettered A.I. industry are setting out to raise tens of millions of dollars to back candidates of their own.

Talks have revolved around plans to raise about $50 million for a new network of super PACs that would back midterm candidates in both parties who prioritize A.I. regulations, according to four people briefed on the discussions, who insisted on anonymity to disclose the closely held plans. The founder of the network, Brad Carson, a Democratic former congressman from Oklahoma, on Tuesday confirmed some details about its plans in an interview with The New York Times.

The super PACs are meant to counter a group called Leading the Future, which generally opposes strong A.I. regulations and has raised $100 million combined from Andreessen Horowitz, a prominent A.I. investing firm, and the family of Greg Brockman, a co-founder of OpenAI. Leading the Future has chosen its first candidate to oppose in Alex Bores, a Democrat running for a U.S. House seat in New York who has championed A.I. safety legislation. Its allied nonprofit group said on Monday that it was planning a $10 million campaign to support federal A.I. legislation.

In recent weeks, conversations about challenging Leading the Future have accelerated among employees at Anthropic, an A.I. company that favors more guardrails for the technology. The discussions have also included allied donors who are loosely tied to the effective altruism movement, a community of activists whose beliefs include concerns about the safety of A.I.

Anthropic, OpenAI's main start-up rival in the United States, was started by a breakaway group of OpenAI employees who felt the company was insufficiently serious about protecting society from the dangers of A.I. Valued recently at $183 billion, Anthropic has at times been a thorn in the side of the Trump White House, which has generally pushed for acceleration of the domestic A.I. industry and dismissed safety concerns as coming from "doomers." (The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit's claims.)

The goal of the new super PACs, according to Mr. Carson, is to raise $50 million initially, with some of the other people involved hoping to match the $100 million raised by the rival group. The new groups are so motivated to challenge Leading the Future, the Andreessen Horowitz-backed super PAC, that some allies have joked about naming their super PACs "z16a," an inversion of Andreessen Horowitz's commonly used nickname, "a16z."

The money for the super PACs will come from a new 501(c)(4) group, Public First, which will not be required to disclose its donors. Public First will in turn plan to evenly fund one group for Democratic candidates, Jobs and Democracy PAC, and another for Republicans, Defending Our Values PAC.

Some money for the advocacy network is likely to come from Anthropic's wealthy executives and rank-and-file employees, rather than from the company itself. A person close to Anthropic said the company and its executives were exploring various options for political engagement but had not yet decided on a group to support or deployed money into any groups.
Jack Clark, a co-founder of Anthropic, said at an event in Washington in September that "we are actively working" on a super PAC but did not offer details. "There's a huge community of companies and organizations that actually care about getting A.I. right," he said. "And then there might be a community inside the technology industry that has a different view."

Mr. Carson, who until this year served as the president of the University of Tulsa, said in the interview that he knew "having been a congressman, how influential big money can be." So when he read at his home in Oklahoma about Leading the Future's plan, he said, he thought that "voices that represent the public interest needed to try to do something in response." He added that "$50 million and 85 percent of the people on your side is more than enough to defeat $100 million, or even $200 million for that matter."

Mr. Carson has been speaking to wealthy donors, including at Anthropic and even at OpenAI, about the idea ever since news of Leading the Future emerged in August. He has ties to donors through two nonprofit groups he formed last year to sound the alarm about A.I. One group, Americans for Responsible Innovation, has backers that include the Omidyar Network, the main philanthropic vehicle of the eBay founder Pierre Omidyar, and Coefficient Giving, the vehicle of the Facebook co-founder Dustin Moskovitz, who is also a prominent effective altruist.

The two-term congressman, who served from 2001 to 2005, is working with a former Republican congressman, Chris Stewart of Utah, on the new super PACs. Mr. Carson has also been consulting with Jay Shooster, a political strategist who has deep ties to the effective-altruist donor community, the people said. Mr. Carson downplayed any support from the effective-altruism movement, saying that he did not identify with it and that it had "no involvement" with his group.

Money from the A.I. industry is poised to be a major story line of the 2026 midterms. Companies like OpenAI are looking to take a cue from how the cryptocurrency industry achieved many of its goals after spending hundreds of millions to back Republicans in 2024. These businesses hope to defeat the likes of Mr. Bores and State Senator Scott Wiener, a Democratic candidate for the U.S. House in San Francisco who has also focused on A.I. safety. Mr. Bores and Mr. Wiener are likely to be early beneficiaries of the new super PACs.

But the A.I. industry, in contrast with the crypto industry, is not rowing in one direction politically. Some A.I. proponents also believe that the industry should not accumulate too much power, fearing that society will become beholden to the companies' business interests.

The effective altruism movement is similarly well funded, but it has a damaged brand that makes political engagement tricky. The movement counted the FTX founder Sam Bankman-Fried as its most prominent adherent and donor, and it is still recovering from his campaign-finance scandal and criminal conviction. Mr. Moskovitz is now one of the country's largest Democratic donors, but a representative for him said neither he nor his groups were planning to be involved with the new super PACs.
[2]
Marc Andreessen-Backed Super-PAC Pours Millions Into Fighting State AI Regulations
A pro-AI super PAC is investing millions into creating an AI-friendly regulatory environment in the United States. Called "Leading The Future," the super PAC is backed by venture capital firm Andreessen Horowitz, OpenAI president Greg Brockman, Palantir co-founder Joe Lonsdale, and the AI search engine company Perplexity. It launched in August, reportedly armed with more than $100 million to ensure a pro-AI win across the country in the 2026 midterm elections. According to the Wall Street Journal, the super PAC is emboldened by the success of crypto super PAC Fairshake, which counted significant pro-crypto wins in the 2024 presidential and local elections.

The PAC's first target is New York State Assembly member Alex Bores, who is running for a spot in Congress in the Democratic primaries (though the super PAC's efforts are bipartisan, much to the chagrin of the White House). Bores is the co-sponsor of the Responsible AI Safety and Education (RAISE) Act, a landmark piece of state-level AI safety legislation that has passed all votes and is waiting for the approval of Governor Kathy Hochul.

With a year to go until the midterms, the super PAC has found another target: state regulations, like the RAISE Act, that are giving the AI industry a tough time. Leading The Future launched a $10 million campaign on Monday, pushing Washington to adopt "a uniform national approach to AI," the executive director of the PAC's advocacy arm, Nathan Leamer, told CNBC. The advocacy offshoot led by Leamer is a non-profit called Build American AI, and it is entirely dedicated to this goal, with CNBC reporting that the group will run TV, digital, and social media ads to campaign for its legislative agenda. The uniform national approach they are campaigning for would likely override some of the stricter regulations that have been proposed at the state level.

In the absence of any federal regulation governing AI, states like New York and California have taken matters into their own hands with regulations that require AI companies to adopt safety measures. Some in the industry see this as stifling innovation. It's not just Silicon Valley; plenty in Washington are unhappy with this as well. That crowd includes many Republican legislators and President Donald Trump.

Republicans have revived calls for a moratorium on state AI laws. A previous attempt to add a similar moratorium to the Big Beautiful Bill fell through at the eleventh hour due to bipartisan backlash. Several Republicans support child safety laws regarding AI, and a complete moratorium could jeopardize those pieces of legislation as well. The moratorium is expected to either be a standalone bill or be added to a must-pass bill like the National Defense Authorization Act, which will be voted on next month.

Trump already expressed his support on his Truth Social account last week, saying that the United States "MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes." He might be planning an executive order to take care of that: last week, WIRED obtained a draft executive order that would create an "AI Litigation Task Force," which would sue states over AI laws that are deemed to violate federal law governing free speech, interstate commerce, and more. Although reports said Trump could sign this executive order by the end of the week, he hasn't done that yet. Instead, on Monday, he signed another expansive AI-related executive order.
Called "The Genesis Mission," that executive order lays out a plan to use AI to turbocharge the government's efforts to solve 20 core science "challenges" that are still to be determined. There's a theme of centralization running through that, too, as the order charges Energy Secretary Chris Wright with "ensuring that all DOE resources used for elements of the Mission are integrated into a secure, unified platform."
Two opposing super PAC networks are mobilizing over $150 million combined to influence AI regulation policy in the 2026 midterm elections: one, led by former congressman Brad Carson and courting Anthropic employees and allied donors, is pushing for stronger AI safety measures, while the other, backed by Andreessen Horowitz and the family of OpenAI co-founder Greg Brockman, is fighting restrictive regulations.
The artificial intelligence industry is bracing for an unprecedented political battle as two opposing super PAC networks prepare to deploy over $150 million combined in the 2026 midterm elections, in what is shaping up to be the first major electoral confrontation over AI regulation policy in the United States.
At the center of this emerging conflict is a fundamental disagreement about how strictly the AI industry should be regulated. On one side stands Leading the Future, a super PAC backed by venture capital giant Andreessen Horowitz and OpenAI co-founder Greg Brockman's family, which has already raised $100 million to oppose stringent AI regulations [1]. On the other side, a new network of super PACs led by former Democratic congressman Brad Carson aims to raise $50 million to support candidates who prioritize AI safety measures [1].

The push for pro-regulation super PACs has gained momentum among employees at Anthropic, OpenAI's main startup rival, which was founded by former OpenAI employees who believed their previous company was "insufficiently serious about protecting society from the dangers of A.I." [1]. Valued at $183 billion, Anthropic has positioned itself as a more safety-conscious alternative in the AI landscape.

Brad Carson, the former Oklahoma congressman spearheading the pro-regulation effort, confirmed plans to establish a network funded through a 501(c)(4) group called Public First, which will support both Democratic and Republican candidates through separate PACs: Jobs and Democracy PAC and Defending Our Values PAC [1]. Carson expressed confidence that "$50 million and 85 percent of the people on your side is more than enough to defeat $100 million, or even $200 million for that matter" [1].

Leading the Future has already identified its first target: Alex Bores, a Democratic candidate running for a U.S. House seat in New York who has championed AI safety legislation [1]. Bores co-sponsored the Responsible AI Safety and Education (RAISE) Act, landmark state-level AI safety legislation that has passed all votes and awaits Governor Kathy Hochul's approval [2].

The super PAC's strategy extends beyond individual candidates to challenging state-level regulations directly. Leading the Future launched a $10 million campaign on Monday through its advocacy arm, Build American AI, pushing for "a uniform national approach to AI" that would likely override stricter state-level regulations [2].

The political battle reflects broader tensions between federal and state approaches to AI governance. In the absence of comprehensive federal AI regulation, states like New York and California have implemented their own safety requirements, which some industry players view as stifling innovation [2].

President Trump has expressed support for federal preemption, posting on Truth Social that the United States "MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes" [2]. Republicans have revived calls for a moratorium on state AI laws, though previous attempts faced bipartisan resistance due to concerns about child safety legislation [2].