3 Sources
[1]
OpenAI Advocates Electric Grid, Safety Net Spending for New AI Era
The goal of the proposals is to serve as a "starting point" for a wider discussion "to ensure that AI benefits everyone," according to OpenAI.

OpenAI has released a set of policy recommendations meant to help navigate an era of artificial intelligence-fueled upheaval -- including suggesting the creation of a public wealth fund, fast-response social safety net programs and speedier electrical grid development.

In a document released Monday titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First," OpenAI proposed a range of policies related to AI "superintelligence" -- often referred to as software that can outperform humans at all kinds of tasks, but which does not currently exist. Many of the proposals are tied to social change driven by AI, which some fear could lead to widespread job losses.

The company advocates for a public wealth fund that would distribute cash to citizens, giving them "a stake in AI-driven economic growth." It proposes finding a way to let people share in efficiency gains driven by AI -- including by incentivizing employers to experiment with four-day work weeks, as long as workers' output doesn't fall. And it suggests actively measuring how AI affects wages and unemployment -- and then, once "these metrics exceed pre-defined thresholds," offering workers increased social assistance like unemployment benefits or job training.

The goal of the proposals, the company wrote, is to serve as a "starting point" for a wider discussion "to ensure that AI benefits everyone." In an interview, OpenAI's chief global affairs officer Chris Lehane said the policy conversations around AI need to be "as transformative" as the technology itself.

Founded in 2015, OpenAI kicked off the current boom in generative AI in late 2022 with the release of ChatGPT, which remains its most well-known product.
Originally built as a nonprofit dedicated to advancing AI to benefit humanity, the startup has since restructured into a more traditional for-profit company.

OpenAI has said for years that it's working to build what's often referred to as artificial general intelligence, or AGI -- essentially, computers that can do most tasks as well as people. More recently, the company and some of its rivals have discussed plans for more powerful software, or superintelligence. In its latest document, OpenAI defined superintelligence as "AI systems capable of outperforming the smartest humans even when they are assisted by AI."

While OpenAI's ChatGPT is used by more than 900 million people globally each week, many in the US have negative feelings about AI generally, driven in large part by concerns about job displacement as well as power-hungry data centers. Companies like OpenAI and Anthropic, which are at the forefront of AI advancement, have sought to educate the public and policymakers about the potential changes wrought by AI. That's included a range of work on communication, including last week, when OpenAI bought the tech talk show TBPN.

"It is simply not good enough to wave your hands and say, 'Here's all the things that are going to happen and then not actually come up with solutions,'" Lehane said.
[2]
OpenAI calls for robot taxes, a public wealth fund, and a four-day week
Sam Altman's 13-page policy blueprint, 'Industrial Policy for the Intelligence Age,' proposes auto-triggering safety nets, containment playbooks for rogue AI, and direct citizen dividends from AI-driven growth. He told Axios it is a starting point, not a prescription.

OpenAI has published a 13-page policy document calling for sweeping economic reforms to prepare for what it describes as approaching superintelligence, including taxes on automated labour, a national public wealth fund seeded partly by AI companies, and pilots of a 32-hour working week. The document, titled 'Industrial Policy for the Intelligence Age: Ideas to keep people first,' was released as Congress prepares to debate AI legislation. CEO Sam Altman told Axios in an exclusive interview that the scale of change coming from AI is comparable to the Progressive Era and the New Deal, and that the two most immediate dangers are cyberattacks and biological weapons capable of being enabled by advanced AI.

The most radical proposal in the document is the public wealth fund. OpenAI suggests the government create a nationally managed fund, seeded in part by contributions from AI companies themselves, that would invest in AI firms and other businesses adopting the technology and distribute returns directly to American citizens. The model is comparable to Alaska's Permanent Fund, which pays annual dividends to state residents from oil revenues.

On labour, the document floats taxes on automated labour and a shift in the tax base from payroll towards capital gains and corporate income, an acknowledgement that AI could hollow out the wage-and-payroll revenue that currently funds Social Security. The 32-hour workweek proposal is framed as an 'efficiency dividend' from AI-driven productivity gains.

The document includes a section on what it calls 'containment playbooks' for scenarios in which dangerous AI systems become autonomous and capable of replicating themselves.
OpenAI acknowledges scenarios where such systems 'cannot be easily recalled,' and proposes government co-ordination as the response. The blueprint also envisions automatic safety net triggers: when AI-driven displacement metrics hit preset thresholds, benefits including unemployment payments and wage insurance would increase automatically, then phase out when conditions stabilise.

Altman told Axios that a major cyberattack enabled by near-future AI models is 'totally possible' within the next year, and that AI models being used to create novel pathogens is 'no longer theoretical.'

Altman was candid with Axios about the dual nature of the document. OpenAI is the company racing to build the very technology it is warning about, and positioning itself as the responsible actor proposing solutions is plainly also a strategy to shape regulation before regulation shapes it. Anthropic has occupied a similar lane. The policy paper arrives at a moment when OpenAI is preparing for an IPO, has closed a $110 billion private funding round, and is simultaneously under scrutiny over its conversion from non-profit.

Whether the altruism is genuine or strategic, Altman told Axios: 'Some will be good. Some will be bad. But we do feel a sense of urgency. And we want to see the debate of these issues really start to happen with seriousness.'
[3]
Sam Altman Wants To Talk -- Six Takeaways From His Bold Proposal On AI, Wealth Distribution, Governance
Sam Altman, CEO of OpenAI, released a 13-page document on Monday comparing the shift towards superintelligence to past major technological transitions like electricity or the combustion engine. It is a comprehensive proposal on how governments should tax, regulate, and redistribute wealth from AI technology. Six major insights from Altman's plan:

1. Shared Benefits -- Altman advocates for a proactive policy similar to the "Progressive Era" or New Deal to ensure AI breakthroughs translate into shared opportunities that benefit a broad spectrum of people, not just a few powerful entities. He proposes principles for an AI-centered industrial policy, including sharing prosperity broadly, mitigating risk and building governance, and democratizing access and agency.

2. AI-Driven Tax, Wealth Fund -- Altman also outlines initial policy ideas, such as modernizing the tax system. He said policymakers could raise taxes on capital gains, corporate income, and AI-driven profits, or introduce taxes on automation, while offering wage-linked incentives to help firms retain and retrain workers. These measures aim to fund essential programs and support workforce shifts in an AI-driven economy. He also called for creating a Public Wealth Fund: policymakers and AI companies could collaborate on a fund investing in AI-driven growth across companies, with returns distributed to citizens, letting everyone benefit directly from AI's economic upside.

3. Four-Day Workweek -- Use AI efficiency gains to boost worker benefits, fund healthcare and retirement, and test shorter workweeks without reducing pay, turning saved hours into permanent time off or a four-day work week.

4. Policy Pilots & Global AI -- Altman suggests that policy experiments should be piloted by non-government groups, with successful approaches reinforced by the state through regulation, procurement, and investment. The document emphasizes the need for global cooperation, as the transition to superintelligence is already underway worldwide.

5. Containing Dangerous AI -- Societies should create and test plans to contain dangerous AI systems that can't easily be recalled, focusing on limiting their spread, reducing harm, and coordinating responses -- similar to strategies used in cybersecurity and public health.

6. Strengthening Safety Nets -- Altman urges authorities to ensure safety nets like unemployment insurance, SNAP, Social Security, Medicaid, and Medicare work effectively and at scale: track AI's impact on jobs and wages in real time, then automatically expand temporary support -- such as cash assistance, wage insurance, or training -- when disruptions exceed set thresholds, scaling back as conditions stabilize.

OpenAI presents these ideas as a starting point for a global, inclusive conversation on shaping AI's benefits. Progress will rely on ongoing collaboration, experimentation, and feedback, supported by fellowships, research grants, and discussions at the new OpenAI Workshop.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
OpenAI released a 13-page policy blueprint calling for sweeping economic reforms to prepare for approaching superintelligence. The document proposes a public wealth fund seeded by AI companies, taxes on automated labor, and pilots of a 32-hour workweek. CEO Sam Altman warns that major cyberattacks and AI-enabled biological weapons pose immediate dangers.
OpenAI has unveiled a 13-page policy document titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First," proposing sweeping economic reforms to prepare society for what the company describes as approaching superintelligence [1][2]. The document defines superintelligence as AI systems capable of outperforming the smartest humans even when they are assisted by AI [1]. Released as Congress prepares to debate AI legislation, the proposal serves as what CEO Sam Altman calls a "starting point" for wider discussion to ensure AI benefits everyone [2].
The most radical element of the AI policy centers on creating a public wealth fund that would distribute citizen dividends from AI-driven economic growth [3]. OpenAI suggests the government create a nationally managed fund, seeded partly by contributions from AI companies themselves, that would invest in AI firms and other businesses adopting the technology [2]. Returns from the fund would be distributed directly to American citizens, giving everyone "a stake in AI-driven economic growth" [1]. The model draws comparison to Alaska's Permanent Fund, which pays annual dividends to state residents from oil revenues [2].
To address concerns about AI-driven job displacement, the document floats robot taxes on automated labor and proposes shifting the tax base from payroll towards capital gains and corporate income [2]. This acknowledges that AI could hollow out the wage-and-payroll revenue that currently funds Social Security [2]. Sam Altman told Axios that the scale of change coming from AI is comparable to the Progressive Era and the New Deal [2]. Policymakers could raise taxes on AI-driven profits while offering wage-linked incentives to help firms retain and retrain workers, supporting workforce shifts in an AI-driven economy [3].

The proposal includes pilots of a four-day workweek, framed as an "efficiency dividend" from productivity gains driven by AI. OpenAI suggests incentivizing employers to experiment with 32-hour work weeks, as long as workers' output doesn't fall [1]. The company proposes using AI efficiency gains to boost worker benefits, fund healthcare and retirement, and test shorter workweeks without reducing pay, turning saved hours into permanent time off [3].

OpenAI advocates for fast-response social safety net programs that would activate automatically when AI disrupts labor markets [1]. The blueprint envisions automatic safety net triggers: when AI-driven displacement metrics hit preset thresholds, unemployment benefits, wage insurance, and job training would increase automatically, then phase out when conditions stabilize [2]. The document urges authorities to ensure programs like unemployment insurance, SNAP, Social Security, Medicaid, and Medicare work effectively at scale [3]. OpenAI suggests actively measuring how AI affects wages and unemployment in real time [1].
The document includes "containment playbooks" for scenarios in which dangerous AI systems become autonomous and capable of replicating themselves [2]. OpenAI acknowledges scenarios where such systems "cannot be easily recalled" and proposes government coordination as the response, focusing on limiting their spread, reducing harm, and coordinating responses similar to strategies used in cybersecurity and public health [3]. Altman told Axios that a major cyberattack enabled by near-future AI models is "totally possible" within the next year, and that AI models being used to create novel pathogens is "no longer theoretical" [2]. The proposals also include speedier electrical grid development to support power-hungry data centers [1].

The AI policy emphasizes the need for global cooperation, as the transition to superintelligence is already underway worldwide [3]. Altman advocates for proactive governance similar to the Progressive Era to ensure AI breakthroughs translate into shared opportunities benefiting a broad spectrum of people, not just a few powerful entities [3]. The document suggests that policy experiments should be piloted by non-government groups, with successful approaches reinforced by the state through regulation, procurement, and investment [3]. Companies like OpenAI and Anthropic, at the forefront of AI advancement, have sought to educate the public and policymakers about the potential changes wrought by AI [1]. OpenAI's chief global affairs officer Chris Lehane said the policy conversations around AI need to be "as transformative" as the technology itself, adding, "It is simply not good enough to wave your hands and say, 'Here's all the things that are going to happen and then not actually come up with solutions'" [1]. The policy paper arrives as OpenAI prepares for an IPO, has closed a $110 billion private funding round, and faces scrutiny over its conversion from non-profit [2]. While ChatGPT is used by more than 900 million people globally each week, many in the US have negative feelings about AI generally, driven by concerns about job displacement [1].
Summarized by Navi