Curated by THEOUTPOST
On Fri, 14 Mar, 8:02 AM UTC
6 Sources
[1]
What does OpenAI really want from Trump?
When AI giant OpenAI submitted its "freedom-focused" policy proposal to the White House's AI Action Plan last Thursday, it gave the Trump administration an industry wishlist: use trade laws to export American AI dominance against the looming threat of China, loosen copyright restrictions for training data (also to fight China), invest untold billions in AI infrastructure (again: China), and stop states from smothering it with hundreds of new laws. But specifically, one law: SB 1047, California's sweeping, controversial, and for now, defeated AI safety bill.

Hundreds of AI-related bills are flooding state governments nationwide, with hundreds or even thousands more expected by the end of 2025, a regulatory deluge the AI industry is trying to dodge. Broad safety regulations like SB 1047 loom largest, posing a perhaps existential threat to OpenAI. Strikingly absent from the proposal, however, is any notion of how AI should be governed -- or whether it should be governed at all.

"OpenAI's proposal for preemption, just to be totally candid with you, it confuses me a little bit, as someone who's thought about this issue a lot," Dean Ball of the libertarian Mercatus Center told me. While federal preemption -- the legal concept that federal laws supersede contradicting state laws -- is a longstanding practice in several fields, OpenAI's proposal notably does not include a national framework to replace any AI laws the states are working on. OpenAI did not respond to a request for comment on the plan.

OpenAI's argument against state regulation reflects a common perception that 50 states enforcing 50 versions of AI law would create chaos for citizens and AI companies alike. But with a lack of federal action and an explosion of AI's impact on people's everyday lives, state-level proposals are skyrocketing. As of today, fewer than 80 days into 2025, there are 893 AI-related bills up for consideration in 48 states, according to government relations firm MultiState.ai -- compared with 734 submitted in all of 2024. The bills address a broad range of concerns, from political deception and AI-generated nonconsensual intimate imagery to the use of AI tools in handling utilities during natural disasters. Some bills proscribe using AI tools under certain circumstances, while others -- like a bill that encourages AI-powered searches for objectionable content in school libraries -- promote their use.

"This is not just unprecedented for technology policy. I believe this is unprecedented for policy, period, full stop," said Adam Thierer, a senior policy fellow on technology and innovation at the right-leaning R Street Institute. "Name me another area where you've witnessed, within the first 70 days of any given calendar year's legislative cycle, 800 bills introduced. I've just never heard of such a thing."

But the fact that OpenAI targeted one specific law paints a picture of what it doesn't want: a law that holds large-scale frontier model developers liable for damages and implements whistleblower protections, especially in a state where it doesn't have enough political backing. SB 1047, which was passed and then vetoed by California Governor Gavin Newsom in 2024, would have hit the nascent AI industry the hardest.
The bill would have placed strict security restrictions and legal liability on frontier labs developing new AI models that conducted business in the state -- which, essentially, meant every AI lab in the country would have been forced to comply. Though its supporters and opponents hardly fell along ideological lines -- even competitors like Anthropic ended up backing the proposal -- OpenAI publicly opposed the bill, arguing in a letter to the California State Legislature that it would hamper the industry's growth and global leadership.

Federal preemption of state laws is standard practice -- except that, in this case, there are effectively no federal laws, even ones addressing less existential crises. Two bipartisan AI-related bills in Congress gained momentum but ended up not being passed: the No Fakes Act, which would have criminalized generative AI replicas of individuals, and a bill that would have codified the AI Safety Institute that then-President Joe Biden had created via executive order. But with Trump rolling back that order, little is currently in place -- at the nationwide level, at least -- to rein in AI developers.

For now, OpenAI and the tech industry at large are relying on the threat of China to push their goals forward. In a post-DeepSeek world, they're leaning heavily on the potential of national security threats and the loss of America's AI superpower status, inflected with Trumpian nationalism. "The Trump Administration's new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI," OpenAI writes in its comments. "I think the frontier labs, and approximately everyone trying to do any policy entrepreneurship in Washington, DC, has the sense that China is a good way to get people to pay attention," Ball observed.
[2]
OpenAI urges Trump administration to focus 'AI Action Plan' on speed, light regulation
US President Trump gestures as OpenAI CEO Sam Altman speaks in the Roosevelt Room at the White House on January 21, 2025, in Washington, DC.

After President Trump, in one of his initial actions upon returning to the White House, revoked the country's first-ever artificial intelligence executive order, OpenAI got to work making sure it would have a seat at the table when it comes to developing and regulating the nascent technology. On Thursday, OpenAI submitted its proposal to the U.S. government, emphasizing the need for speed in AI advancement and a light hand from regulators while highlighting its take on the dangers of AI technology coming out of China. The proposal underscores OpenAI's direct effort to influence the government's coming "AI Action Plan," a tech strategy report to be drafted by the Office of Science and Technology Policy and submitted to President Trump by July.

In January, President Trump threw out the AI executive order signed by President Biden in October 2023, which was titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." President Trump subsequently issued a new executive order, declaring that "it is the policy of the United States to sustain and enhance America's global AI dominance." He mandated that an AI Action Plan be submitted to the President within 180 days.

OpenAI, which as of last month was reportedly close to finalizing a $40 billion investment from SoftBank at a $260 billion valuation, is in a precarious position in Trump's second White House. While the company was part of Trump's Stargate announcement and the billions of dollars of AI infrastructure investment tied to the plan, OpenAI is in a heated legal and public relations battle with Elon Musk, who owns a rival AI startup and is one of Trump's top advisors.
[3]
Trump's call for AI deregulation gets strong backing from Big Tech
Washington (AFP) - Major tech firms are pushing the administration of President Donald Trump to loosen rules on building artificial intelligence, arguing it is the only way to maintain a US edge and compete with China. Spooked by generative AI's sudden advance, governments initially scrambled to develop guardrails, as major tech companies rapidly integrated the technology into their products. Since taking office in January, the Trump administration has shifted focus toward accelerating AI development at all costs, pushing aside concerns about the models hallucinating, producing deepfakes, or destroying human jobs.

"The AI future is not going to be won by hand-wringing about safety," Vice President JD Vance told world leaders at a recent AI summit in Paris. This message unsettled international partners, particularly Europe, which had proudly established the EU AI Act as a new standard for keeping the technology in check. But, faced with America's new direction, European officials are now pivoting their messaging toward investment and innovation rather than safety. "We're going to see a significant pullback in terms of the regulatory efforts... worldwide," explained David Danks, professor of data science and philosophy at the University of California San Diego. "That certainly has been signaled here in the United States, but we're also seeing it in Europe."

'Step back'

Tech companies are capitalizing on this regulatory retreat, seeking the freedom to develop AI technologies that they claim have been too constrained under the Biden administration. One of Trump's first executive actions was dismantling Biden's policies, which had proposed modest guardrails for powerful AI models and directed agencies to prepare to oversee the change. "It's clear that we're taking a step back from that idea that there's going to be a coherent overall approach to AI regulation," noted Karen Silverman, CEO of AI advisory firm Cantellus Group.

The Trump administration has invited industry leaders to share their policy vision, emphasizing that the US must maintain its position as the "undeniable leader in AI technology" with minimal investor constraints. The industry submissions will shape the White House's AI action plan, expected this summer. The request has yielded predictable responses from major players, with a common theme emerging: China represents an existential threat that can only be addressed by plowing an open path for companies unencumbered by regulation.

OpenAI's submission probably goes the furthest in its contrast with China, highlighting DeepSeek, a Chinese-developed generative AI model created at a fraction of American development costs, to emphasize the competitive threat. According to OpenAI, American AI development should be "protected from both autocratic powers that would take people's freedoms away, and layers of laws and bureaucracy that would prevent our realizing them." For AI analyst Zvi Mowshowitz, OpenAI's "goal is to have the federal government not only not regulate AI," but also ban individual US states from doing so. Currently engaged in litigation with the New York Times over the use of its content for training, OpenAI also argues that restricting access to online data would concede the AI race to China. "Without fair use access to copyrighted material... America loses, as does the success of democratic AI," OpenAI said.
Another response submitted by a group of Hollywood celebrities -- including Ben Stiller and Cynthia Erivo -- rejected the notion, reflecting the film and television industry's contentious relationship with the technology.

'Essential'

In its response, Meta touted its open Llama AI model as part of the fight for American technological superiority. "Open source models are essential for the US to win the AI race against China and ensure American AI dominance," the company stated. CEO Mark Zuckerberg has even advocated for retaliatory tariffs against European regulatory efforts. Google's input focused on infrastructure investment for AI's substantial energy requirements. Like its peers, Google also opposes state-by-state regulations in the US that it claims would undermine America's technological leadership.

Despite the push for minimal oversight, industry observers caution that generative AI carries inherent risks, with or without government regulation. "Bad press is universal, and if your technology leads to really bad outcomes, you're going to get raked over the public relations coals," warned Danks. Companies have no choice but to mitigate the dangers, he added.
[4]
OpenAI calls on Trump to eliminate restrictions on the AI industry - SiliconANGLE
OpenAI has submitted a lengthy proposal to the U.S. government, aiming to influence its upcoming AI Action Plan, a strategy report that many believe will guide President Donald Trump's policy on artificial intelligence technology. The proposal from America's most recognizable AI company is predictably controversial, calling for the U.S. government to emphasize speed of development over regulatory scrutiny, while also warning of the dangers posed by Chinese AI firms to the country.

Trump called for the AI Action Plan to be drafted by the Office of Science and Technology Policy and submitted to him by July shortly after returning to the White House for his second term. That happened in January, when he threw out an executive order pertaining to AI that was signed by his predecessor Joe Biden in October 2023, replacing it with his own, declaring that "it is the policy of the United States to sustain and enhance America's global AI dominance."

OpenAI has wasted little time in trying to influence the recommendations in that plan, and in its proposal it made clear its feelings on the current level of regulation in the AI industry. It called for AI developers to be given "the freedom to innovate in the national interest" and advocated for a "voluntary partnership between the federal government and the private sector" instead of "overly burdensome state laws." It argues that the federal government should be allowed to work with AI companies on a "purely voluntary and optional basis," saying that this will help to promote innovation and adoption of the technology. Moreover, it called for the U.S. to create an "export control strategy" covering U.S.-made AI systems, which would promote the global adoption of its homegrown AI technology.

The company further argues in its recommendations that the government should give federal agencies greater freedom to "test and experiment" with AI technologies using "real data," and it also asked Trump to grant a temporary waiver that would negate the need for AI providers to be certified under the Federal Risk and Authorization Management Program. It called on Trump to "modernize" the process that AI companies must go through to be approved for federal government use, asking for the creation of a "faster, criteria-based path for approval of AI tools." OpenAI argues that its recommendations would make it possible for new AI systems to be used by federal government agencies up to 12 months faster than is currently possible. However, some industry experts have raised concerns that such speedy adoption of AI by the government could create security and privacy problems.

Pushing harder, OpenAI also told the U.S. government it should partner more closely with private sector companies to build AI systems for national security use. It explained that the government could benefit from having its own AI models trained on classified datasets, as these could be "fine-tuned to be exceptional at national security tasks." OpenAI has a big interest in opening up the federal government sector for AI products and services, having launched a specialized version of ChatGPT, called ChatGPT Gov, in January. It's designed to be run by government agencies in their own secure computing environments, where they have more control over security and privacy.

Aside from promoting government use of AI, OpenAI also wants the U.S. government to make its own life easier by implementing a "copyright strategy that promotes the freedom to learn." It asked Trump to develop regulations that would preserve the ability of American AI models to learn from copyrighted materials. "America has so many AI startups, attracts so much investment, and has made so many research breakthroughs largely because the fair use doctrine promotes AI development," the company stated. It's a controversial request, because the company is currently battling multiple news organizations, musicians and authors over copyright infringement claims. The original ChatGPT that launched in late 2022 and the more powerful models that have since been released are all largely trained on the public internet, which is the main source of their knowledge. However, critics of the company say it is basically just plagiarizing content from news websites, many of which are paywalled. OpenAI has been hit with lawsuits by The New York Times, the Chicago Tribune, the New York Daily News and the Center for Investigative Reporting, the nation's oldest nonprofit newsroom. Numerous artists and authors have also taken legal action against the company.

OpenAI's recommendations took aim at some of the company's rivals too, notably DeepSeek Ltd., the Chinese AI lab that developed the DeepSeek R1 model at a fraction of the cost of anything OpenAI has developed. The company described DeepSeek as "state-subsidized" and "state-controlled," and asked the government to consider banning its models and those from other Chinese AI firms. In the proposal, OpenAI claimed that DeepSeek's R1 model is "insecure," because DeepSeek is required by Chinese law to comply with certain demands regarding user data. By banning the use of models from China and other "Tier 1" countries, the U.S. would be able to minimize the "risk of IP theft" and other dangers, it said. "While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing," OpenAI said.
[5]
AI industry sends wishlist to Trump: 4 takeaways
A variety of artificial intelligence (AI) firms and industry groups are hoping to shape the Trump administration's forthcoming policy on the emerging technology and keep the U.S. a leader in the space. While the recommendations come from a variety of industry players, the proposals largely overlap and offer a glimpse into how the industry envisions its future under President Trump. The White House set a Saturday deadline for comments on its "AI Action Plan." The feedback, which it states will influence its future policy, will likely be made public in the days following the deadline. Here are four takeaways from the recommendations:

Need for a federal framework, but not overdoing regulation

Multiple companies and groups called for a clearer regulatory framework, but strongly argued against any policies they believe will hamper AI innovation. OpenAI, in its 15-page response to the White House, called for a regulatory strategy that also gives developers the "freedom to innovate." The popular ChatGPT maker suggested a "holistic approach" involving voluntary partnerships between the federal government and private sector, while giving private companies exemption from the hundreds of AI-related bills introduced at the state level. There are already multiple partnerships between the government and AI firms like OpenAI, though it is unclear if they will last under any cuts to the Commerce Department and its AI Safety Institute.

The AI industry has long called for regulatory clarity at the federal level, though debate over these rules has stalled most measures from passing Congress. States have taken the issue into their own hands, resulting in a patchwork of regulations across the country that firms often argue are too difficult to comply with. "This patchwork of regulations risks bogging down innovation and, in the case of AI, undermining America's leadership position," OpenAI wrote.

Concerns about overregulation are also felt among "middle tech" companies, which fear it could interfere with their prospects given their limited resources. Internet Works, the association representing companies like Roblox, Pinterest, Discord and Reddit, is advocating for flexibility in any regulation that comes down the pipeline. Regulation should "be scaled to the size and operational capacity of all participants to prevent smaller enterprises and Middle Tech companies from being disproportionately impacted," the association wrote in its proposal first shared with The Hill. The regulation should be risk-based, Internet Works argued, applying stricter oversight only when there is an increased risk of harm to users.

The Consumer Technology Association (CTA), a standards and technology trade organization, also pushed for federal primacy, with CTA senior vice president of government affairs Michael Petricone calling state-by-state AI regulations a "compliance nightmare." CTA suggested these standards should be voluntary and industry-led to avoid crushing startups. For his part, Trump has signaled a scaling back of regulations that may appeal in part to some of these concerns. During his first week in office, Trump signed an executive order revoking past government policies that he said acted as "barriers to American AI innovation." Vice President Vance doubled down on this sentiment last month, when he slammed "excessive regulation" at the Paris AI Summit.
Strengthening export controls amid foreign competition

The need for strengthened export controls was a common request among some major AI firms, signaling increased concern in the industry over foreign competition. Anthropic pushed for hardened export controls specifically on semiconductors and semiconductor tooling, and pointed to the Trump administration's first-term restrictions as an effective approach. Meanwhile, OpenAI's proposal for export controls placed a heavy focus on China, a similar concern of the Trump administration. "A comprehensive export control strategy should do more than restrict the flow of AI technologies to the PRC -- it should ensure that America is 'winning diffusion', i.e., that as much of the world as possible is aligned to democratic values and building on democratic infrastructure," OpenAI wrote.

Tightened chip exports were a key focus for the former Biden administration, which announced an AI Diffusion Rule in its final days in office earlier this year. The rule placed caps on chip sales to most countries around the world, except for 18 U.S. allies and partners. OpenAI proposed various changes to the AI Diffusion Rule, including a more aggressive banning of China or nations aligned with the Chinese Communist Party (CCP) from access to "democratic AI systems." It comes nearly two months after the surge of Chinese AI startup DeepSeek, which took the internet and stock markets by storm in January after claiming to have built a competitive model without U.S. chips at a fraction of the cost it takes AI firms to build large language models. OpenAI CEO and co-founder Sam Altman has largely shrugged off DeepSeek as a real threat, and the company's proposal called on the government to ban the startup's models. Google, the maker of the Gemini AI chatbot, approached the subject with a different tone, stating export controls can play a role in national security but only when "carefully crafted." The company criticized the Biden administration's AI export rules as "counterproductive," potentially "undermin[ing] economic competitiveness."

Government adoption of AI

As the government looks to create policy on AI, industry players hope it will incorporate the tools in federal agencies' own work. Google and OpenAI both suggested the government "lead by example" in AI adoption and deployment. This may include using AI for streamlining purposes and modernizing agencies' technologies to keep up with foreign governments. AI firms have increasingly made efforts to have their technology incorporated in the government. In January, OpenAI launched a new version of ChatGPT specifically made for government agencies and workers. And last month, scientists with the Energy Department gathered to evaluate models from Anthropic, OpenAI and other firms for science and national security purposes. Anthropic encouraged further model testing of this nature, which could involve standardized frameworks, secure testing equipment and expert teams to point out risks or threats.

More money for AI infrastructure

The Trump administration made clear from day two that it believes AI infrastructure development is crucial to the advancement of AI. Trump, joined by OpenAI CEO Sam Altman and other industry figures on his second day back in office, announced an up to $500 billion investment in building AI infrastructure in the U.S. The project, called Stargate, will "keep" the technology in this country, Trump said at the time, referencing China as a competitor.
AI firms seem to be in agreement, especially when it comes to infrastructure that will help meet the unprecedented energy demands required to build and maintain AI tools. Anthropic floated allocating existing federal funding towards energy infrastructure projects, while Google said the U.S. government should pursue policies with the availability of energy in mind. "A potential lack of new energy supply is the core constraint to expanding AI infrastructure in the near term. Both training and inference computational needs for AI are growing rapidly," Google wrote in its proposal. According to a Department of Energy (DOE) report late last year, the energy demand for U.S. data centers tripled over the past 10 years and is expected to double or triple by 2028. Data centers are also projected to consume between six and 12 percent of the U.S.'s electricity by 2028, according to the report.
[6]
Sam Altman's OpenAI Urges Trump Administration For Rapid AI Advancement And Reduced Regulation
The Sam Altman-led OpenAI has put forth a proposal to the Trump administration, urging a rapid acceleration in AI development and a reduction in regulation.

What Happened: OpenAI is making a bid for a significant role in the development and regulation of AI technology ahead of the submission of the "AI Action Plan" to President Donald Trump in July. The company presented a proposal that underscores the urgency of swift AI development and a lenient regulatory approach, while also spotlighting potential threats from AI technology originating from China, CNBC reported on Thursday.

The proposal from OpenAI advocates for "the freedom to innovate in the national interest" and a "voluntary partnership between the federal government and the private sector" as opposed to "overly burdensome state laws." The AI company also proposed an "export control strategy" for AI developed in the U.S. and the encouragement of worldwide adoption of American AI systems. OpenAI also urged the government to partner with the private sector in advancing AI for national security purposes. The company recently introduced ChatGPT Gov, a product tailored specifically for use by the U.S. government. The ChatGPT maker also advocated for "a copyright approach that supports the freedom to learn" and for "protecting the ability of American AI models to learn from copyrighted content."

Why It Matters: This proposal from OpenAI comes at a time when concerns about China's potential dominance in the AI sector are growing. Earlier in March, former Google CEO Eric Schmidt warned the Trump administration of nuclear-level risks in the global superintelligent AI race. He also cautioned about the West's need to prioritize a combination of open- and closed-source AI models to prevent China from taking the lead. Similarly, in February, Microsoft Corporation wrote to the Trump administration asking it to alter the Biden-era "AI Diffusion Rule," which might inadvertently strengthen China's rapidly expanding AI sector, as the rule deterred U.S. companies from building data centers in many U.S. ally nations by classifying them as Tier Two.
OpenAI submits a proposal to the Trump administration's AI Action Plan, advocating for minimal regulation, federal preemption of state laws, and a focus on competing with China in AI development.
OpenAI, one of America's leading AI companies, has submitted a comprehensive proposal to the U.S. government, aiming to influence the upcoming "AI Action Plan" mandated by President Donald Trump [1][4]. This proposal, which emphasizes speed in AI development and minimal regulation, comes in response to Trump's executive order declaring it "the policy of the United States to sustain and enhance America's global AI dominance" [2].
Light Regulation: OpenAI advocates for a "voluntary partnership between the federal government and the private sector" instead of "overly burdensome state laws" [4]. The company argues for giving AI developers "the freedom to innovate in the national interest" [4].
Federal Preemption: The proposal calls for federal laws to supersede state-level AI regulations, potentially nullifying hundreds of AI-related bills currently under consideration in various states [1][4].
Copyright Strategy: OpenAI requests regulations that preserve the ability of American AI models to learn from copyrighted materials, citing the fair use doctrine as crucial for AI development [4].
Export Controls: The company proposes an "export control strategy" to promote global adoption of U.S.-made AI technology while restricting access to "democratic AI systems" for China and its allies [4][5].
Government Adoption: OpenAI suggests faster approval processes for AI tools in federal agencies and partnerships with the private sector to build AI systems for national security purposes [4][5].
A significant portion of OpenAI's proposal focuses on the perceived threat from Chinese AI development: it describes DeepSeek as "state-subsidized" and "state-controlled," asks the government to consider banning models from the PRC, and frames the AI race as a contest between American-led "democratic AI" and "CCP-built autocratic, authoritarian AI."
OpenAI's proposal aligns with a broader industry push for minimal oversight and a focus on competing with China: Meta touted its open Llama models as "essential" to winning the AI race, Google pressed for energy and infrastructure investment, and Anthropic called for hardened semiconductor export controls.
The proposal has sparked debates about the balance between innovation and regulation in the AI sector: critics question federal preemption offered without any replacement framework, Hollywood figures pushed back on looser copyright rules, and observers warn that generative AI carries inherent risks with or without government oversight.
As the Trump administration prepares its AI Action Plan, the tech industry's input, exemplified by OpenAI's proposal, is likely to play a significant role in shaping U.S. AI policy for the foreseeable future [2][5].