11 Sources
[1]
Newsom Signs Executive Order Requiring AI Companies to Provide Safety, Privacy Guardrails
California Gov. Gavin Newsom on Monday signed an executive order requiring AI companies that do business with the state to institute safety and privacy guidelines. The order (PDF) is designed to ensure that companies contracting with the state adhere to rigorous standards and develop responsible policies to prevent misuse of their technology while protecting consumers' safety and privacy, according to Newsom's office. "California leads in AI, and we're going to use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way," Newsom said in a statement. "While others in Washington are designing policy and creating contracts in the shadow of misuse, we're focused on doing this the right way." The executive order comes as the Trump administration maintains that the federal government should be responsible for regulating the AI industry -- and that requiring AI companies to comply with 50 different sets of state laws would prevent the US from "winning" the global AI race. The White House recently released a new policy framework for regulating generative AI that focuses on some of the biggest concerns people have about AI: job loss, copyright chaos for creators, rapidly expanding infrastructure such as data centers and the protection of vulnerable groups like children. But critics say it doesn't go far enough to regulate the fast-growing AI industry. Some states have passed laws making it a crime to create sexual images of people without their consent, while others have placed restrictions on insurance companies using AI to approve or deny health care claims. Companies, including Google, Meta, OpenAI and Andreessen Horowitz, have been calling for national AI standards rather than litigating across all 50 states.
[2]
Newsom orders government to consider AI harm in contract rules
The next time the federal government labels a business a supply-chain risk, as the Department of Defense did last month to San Francisco-based AI tools maker Anthropic, the state of California will review that designation and make its own decision about whether to do business with them. That's according to an executive order signed by Gov. Gavin Newsom on Monday. The order followed a dispute between Anthropic and the Defense Department over contract terms barring the military from using Anthropic systems for domestic mass surveillance and fully autonomous weaponry. By designating Anthropic a supply chain risk, the Department of Defense effectively barred the startup from competing for certain military contracts and subcontracts. A judge recently issued a temporary injunction to block the designation. The broader purpose of Newsom's order was to place guardrails on the use of AI by state employees while at the same time encouraging them to accelerate their use of the technology. Many of the largest AI companies in the world are based in California, and the state also leads the nation in volume of AI regulations. The order imposes a set of new mandates on state agencies. Those mandates come at a time when more than 20 California departments and agencies are working to develop or use Poppy, a generative AI assistant for state employees, and when half a dozen state agencies are testing AI to do things like assist state employees and help homeless people and businesses. They also come as state courts and city governments are increasing their use of the technology. Newsom's office said President Donald Trump and Republicans in Washington D.C. have rolled back protections or ignored the ways AI can harm people. "Unlike the Trump administration, California remains committed to ensuring that AI solutions adopted and deployed by (California)... cannot be misused by bad actors," the governor's office said in a press release announcing the order.
At the federal level, Trump has signed executive orders to discourage states from regulating AI and urged federal agencies to adopt AI to do things like reduce federal regulation and accelerate decisions made about Medicare. The White House introduced an AI policy framework last month that the president wants Congress to take up. That proposal takes a light touch approach to regulation and does not address issues related to bias, discrimination, or civil rights. This is the second executive order signed by Newsom to address artificial intelligence. A 2023 order aimed exclusively at generative AI, the sort that powers systems like ChatGPT and Midjourney, similarly called for more use of AI by state agencies and ordered them to put guardrails in place. Newsom's handling of AI issues is closely watched by both union leaders, who in February pledged that they will not support his run for president without more worker protections from the technology, and big tech donors, who are pouring money into influencing California politics ahead of midterm elections this fall. ___ This story was originally published by CalMatters and distributed through a partnership with The Associated Press.
[3]
What to Know About California's Executive Order on A.I.
Gov. Gavin Newsom of California on Monday issued a first-of-its-kind executive order requiring safety and privacy guardrails from artificial intelligence companies that contract with the state. California has been a leader in tech lawmaking, and was the first state to pass a law mandating safety and transparency from the biggest A.I. companies. Mr. Newsom, a Democrat, signed the order partly as a message to President Trump, who has been trying to bat down state attempts to regulate A.I. Here's what's in his executive order.
Contractor Vetting
Companies vying for government contracts will first have to explain their safety and privacy policies around A.I. The state will look carefully at policies on how the companies prevent exploitation of individuals, including the spread of child sexual abuse materials. The government will also consider whether A.I. models, the technology that powers chatbots and other tools, are used to monitor individuals or are used to block certain speech. Companies should also explain how they are avoiding bias in their systems.
Independence From Federal Contracting Standards
If the federal government designates a company a supply chain risk, which the Pentagon has recently done with the A.I. start-up Anthropic, California will independently conduct its own assessment. If the company isn't determined to be a risk, the state may allow the company to continue on as a contractor. This is significant because the Pentagon's legal tussle with Anthropic, which had provided the Defense Department with A.I. technologies for use on classified systems, has exposed a rift in the administration's pursuit of A.I. for war use. The Pentagon terminated its contract with Anthropic after the company said the government couldn't use its models for mass surveillance and the deployment of autonomous weaponry.
Watermarking Requirement
The governor also called on state officials to begin watermarking A.I.-generated or manipulated videos that they create.
The technique is aimed at guarding against the spread of misinformation. It would also allow consumers to tell the difference between human-generated and A.I.-generated images produced by the state.
[4]
California cements its role as the national testing ground for AI rules
Why it matters: California's multi-pronged approach makes it likely that AI companies in the U.S. will treat the state's rules as a de facto national standard, even as the White House moves to rein in state regulation.
* It follows a familiar pattern: California acts first, companies adapt to keep doing business there and Congress dithers, eventually ceding its role to states due to gridlock.
Driving the news: Gov. Gavin Newsom signed an AI executive order this week as state legislators advance a number of AI bills and consider other regulatory avenues for AI.
The big picture: California is moving ahead as the Trump administration pushes for a national AI standard that would pre-empt nearly all state-level AI laws.
* The White House last month unveiled its AI legislative framework, essentially a wishlist for an elusive bill from a divided Congress.
* Meanwhile, Newsom, a 2028 Democratic presidential contender, is positioning himself as the inverse of President Trump on AI.
Still, the state is hardly immune to Big Tech influence even as it manages to pass tech legislation.
* OpenAI and Anthropic have been highly involved in pushing various bills and ballot initiatives, often pairing with online safety groups to do so, to mixed results.
What they're saying: "California's always been the birthplace of innovation. But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk," Newsom said in a statement about the executive order.
* "While others in Washington are designing policy and creating contracts in the shadow of misuse, we're focused on doing this the right way."
* Google and Anthropic declined to comment on the order. OpenAI said in a statement that "we are glad to see Governor Newsom continuing to lead on AI so California can continue to lead the world on AI."
* A White House official told Axios that the administration is "proud" of its AI framework and "happy to engage with legislation that is consistent with the framework."
How it works: Newsom's AI order aims to "raise the bar for AI companies seeking to do business with the state," per the announcement, and makes procurement standards stronger.
* The state will develop a plan for contracting best practices requiring companies to explain their policies on distribution of illegal content, model bias and violation of civil rights and free speech.
* In a clear shot at the Pentagon-Anthropic dispute, the order also enables California to "separate the procurement authorization process from the federal government's if needed," per the release.
Lawmakers in the California State Assembly and Senate have also introduced a sweeping AI chatbot bill for protecting minors that would build on a chatbot law already in effect.
* "While Washington steps back from its responsibility to protect Americans from AI harms, California is stepping up on every front," Assemblymember Rebecca Bauer-Kahan told Axios.
* "We can lead the world in AI and still demand that it works for people, not against them."
What we're watching: Multiple AI and tech policy sources told Axios Newsom's executive order itself may lack strong legal teeth, but it will end up influencing company policies because they all want to do business with California.
[5]
California to impose new AI regulations in defiance of Trump call
Gavin Newsom signs order to prioritize public safety and rights as president seeks to prevent 'cumbersome' rules California will impose new standards on artificial intelligence companies seeking to do business with the state, defying Donald Trump's demands to keep the controversial industry as deregulated as possible. Democratic governor Gavin Newsom signed an executive order on Monday that gives the state four months to develop AI policies that prioritize public safety. Companies hoping to sign contracts with the state of California will have to show they have policies to keep AI from distributing child sexual abuse material and violent pornography. They will also show how their models avoid incorporating "harmful bias" and detail policies aimed at avoiding "unlawful discrimination, detention, and surveillance". The order directs the state to come up with best practices for watermarking AI-generated or -manipulated images and videos. "California's always been the birthplace of innovation," Newsom wrote in a statement. "But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk. "California leads in AI, and we're going to use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way." California's changes are the latest in a series of state-level attempts to regulate an AI industry that has repeatedly raised public safety concerns and worries that the expensive tech will degrade the value of labor and kill jobs. States have passed more than 100 laws to shield children from chatbots and to block AI companies from pilfering copyright-protected material, according to the New York Times. The White House issued a national policy framework for AI in December that discouraged states from passing such regulations. "To win, United States AI companies must be free to innovate without cumbersome regulation," Trump's executive order announcing the framework reads. 
"But excessive state regulation thwarts this imperative." Trump's order directed the justice department in December to establish an "AI Litigation Task Force" to challenge state AI regulations.
[6]
California Tightens AI Contract Rules as Fight With Trump Admin Grows - Decrypt
State officials will develop procurement rules addressing issues including bias, misuse, and civil rights risks. The conflict between Washington and the states over artificial intelligence policy escalated Monday after California Governor Gavin Newsom signed an executive order requiring stronger safeguards from AI companies seeking state contracts. According to the order, companies selling AI systems to California agencies will be required to demonstrate policies that prevent misuse and protect privacy, security, and civil rights. "California's always been the birthplace of innovation. But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk," Newsom said in a statement. "California leads in AI, and we're going to use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way. While others in Washington are designing policy and creating contracts in the shadow of misuse, we're focused on doing this the right way." Newsom's order directs the state's Government Operations Agency to develop procurement standards for AI vendors that address issues including illegal content generation, model bias, and risk to civil rights and freedom of speech. The order also directs the California Department of Technology to develop recommendations for watermarking AI-generated images and manipulated video. The order places California in conflict with the Donald Trump administration's effort to establish national AI standards and limit state-level regulation. Earlier this month, the Trump administration released a national artificial intelligence policy framework urging Congress to establish federal standards and reduce what officials describe as a patchwork of state AI regulations. Kevin Frazier, an adjunct research fellow at the Cato Institute, said the dispute reflects a longstanding constitutional balance between state and federal authority. 
"Every technological breakthrough -- from the steamboat to superintelligence -- raises key questions about how to allocate regulatory authority between the states and the federal government," Frazier told Decrypt. "The Constitution provides a clear answer: the federal government must lead on matters of economic and national security as well as those demanding a uniform response; states can exercise their traditional police powers within their borders." Frazier called Newsom's executive order "a prime example of federalism in action," and said that companies that reject California's requirements can choose not to sell to the state. "Meanwhile, Congress is still in a position to set the terms of the pace and direction of the country's AI ambitions," he said. Quinn Anex-Reis, a senior policy analyst at the Center for Democracy and Technology, said California's size and purchasing power mean it could influence how companies design and test AI systems if they want to sell to the state. "Government contracting is very valuable to companies," Anex-Reis told Decrypt. "It's a huge part of business for technology developers generally, and a growing avenue of business for AI developers specifically." He said procurement rules are one of the most effective ways governments can shape how AI systems are developed and evaluated. "The procurement process is a really important place to pay attention to," Anex-Reis said. "Because that's really the most important place the state can look to set protections and expectations about how vendors develop their tools." Newsom has emerged as a national Democratic figure and potential 2028 presidential candidate. A recent Politico-UC Berkeley Citrin Center poll found him leading former Vice President Kamala Harris by 14 points among likely Democratic primary voters in California. 
The policy clash over AI regulation places him in direct conflict with the Trump administration as debates intensify over who should set the rules governing the technology. Last summer, the Trump administration ordered federal agencies to avoid contracts with what it called "woke AI" models and to procure systems that demonstrate ideological neutrality. Despite that, Anex-Reis said the question around AI regulation is bigger than politics. "This really shouldn't be a political issue," Anex-Reis said. "This is really about making sure taxpayer dollars aren't wasted and that the tools that our government buys works."
[7]
California imposes new AI regulations on businesses in "first-of-its-kind" executive order signed by Newsom
California will require artificial intelligence companies to implement safety and privacy guardrails under a new order from Gov. Gavin Newsom. The executive order, which the governor is calling "first-of-its-kind," was signed Monday. "California's always been the birthplace of innovation. But we also understand the flip side: in the wrong hands innovation can be misused in ways that put people at risk," Newsom said in a statement after signing the order. Newsom said the order comes in response to moves by the Trump administration to limit state regulation of AI in favor of a single nationwide approach, following lobbying by big tech companies. "While others in Washington are designing policy and creating contracts in the shadow of misuse, we're focused on doing this the right way," Newsom stated. Under the order, companies seeking to do business with the state will need to certify their AI systems include safeguards against illegal content, harmful bias and potential civil rights violations. The order also directs state agencies to expand their use of vetted AI tools, with the goal of improving public services. "California leads in AI, and we're going to use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way," Newsom stated. Earlier this month, the White House unveiled a national framework outlining how it wants Congress to address AI concerns, favoring a "light-touch" approach to regulation.
[8]
Newsom Orders Government to Consider AI Harm in Contract Rules
The next time the federal government labels a business a supply-chain risk, as the Department of Defense did last month to San Francisco-based AI tools maker Anthropic, the state of California will review that designation and make its own decision about whether to do business with them. That's according to an executive order signed by Gov. Gavin Newsom on Monday. The order followed a dispute between Anthropic and the Defense Department over contract terms barring the military from using Anthropic systems for domestic mass surveillance and fully autonomous weaponry. By designating Anthropic a supply chain risk, the Department of Defense effectively barred the startup from competing for certain military contracts and subcontracts. A judge recently issued a temporary injunction to block the designation. The broader purpose of Newsom's order was to place guardrails on the use of AI by state employees while at the same time encouraging them to accelerate their use of the technology. Many of the largest AI companies in the world are based in California, and the state also leads the nation in volume of AI regulations. The order imposes a set of new mandates on state agencies. Those mandates come at a time when more than 20 California departments and agencies are working to develop or use Poppy, a generative AI assistant for state employees, and when half a dozen state agencies are testing AI to do things like assist state employees and help homeless people and businesses. They also come as state courts and city governments are increasing their use of the technology. Newsom's office said President Donald Trump and Republicans in Washington D.C. have rolled back protections or ignored the ways AI can harm people. "Unlike the Trump administration, California remains committed to ensuring that AI solutions adopted and deployed by (California)... cannot be misused by bad actors," the governor's office said in a press release announcing the order.
At the federal level, Trump has signed executive orders to discourage states from regulating AI and urged federal agencies to adopt AI to do things like reduce federal regulation and accelerate decisions made about Medicare. The White House introduced an AI policy framework last month that the president wants Congress to take up. That proposal takes a light touch approach to regulation and does not address issues related to bias, discrimination, or civil rights. This is the second executive order signed by Newsom to address artificial intelligence. A 2023 order aimed exclusively at generative AI, the sort that powers systems like ChatGPT and Midjourney, similarly called for more use of AI by state agencies and ordered them to put guardrails in place. Newsom's handling of AI issues is closely watched by both union leaders, who in February pledged that they will not support his run for president without more worker protections from the technology, and big tech donors, who are pouring money into influencing California politics ahead of midterm elections this fall. ___ This story was originally published by CalMatters and distributed through a partnership with The Associated Press.
[9]
California AI order requires firms seeking state contracts to have safeguards against abuse - The Economic Times
California Governor Gavin Newsom signed an executive order on Monday that requires firms seeking contracts with the state to provide safeguards against AI misuse, including the generation of illegal content, harmful bias and violations of civil rights. To avoid misinformation, the order requires agencies to watermark images or videos that may be generated through AI as per the guidance issued by the state. If the federal government labels a company as a supply chain risk, California will conduct its own assessment and may allow it to remain a contractor if it does not find it to be a risk. This order comes after the Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic, barring government contractors from using the firm's technology in their work for the US military. Within 120 days, California's Department of General Services and Department of Technology will submit recommendations for new AI-related vendor certifications which would allow firms to attest to responsible AI governance and public-safety protections. The order underlines the state's efforts to strike an independent stance despite efforts by some Republican lawmakers for it to defer to federal authorities on law and regulation. In February, California Attorney General Rob Bonta told Reuters in an interview his office is developing its in-house expertise through its "AI oversight, accountability and regulation program."
[10]
California Governor Gavin Newsom Cracks Down On AI Misuse With New Order Mandating Safeguards: 'While Tru
On Monday, California tightened oversight of AI, with Gov. Gavin Newsom (D-Calif.) rolling out sweeping new rules aimed at curbing misuse and protecting civil liberties. New AI Rules Target Misuse, Bias, Deepfakes Newsom signed an executive order requiring companies seeking contracts with the state of California to implement safeguards against AI misuse, including the creation of illegal content, harmful bias and civil rights violations. The order also directs state agencies to clearly label AI-generated images and videos with watermarks, a move designed to limit misinformation and the spread of deepfakes. California Takes Independent Stance On Federal AI Risk Labels This development follows the move by the Pentagon to designate Claude-parent Anthropic as a supply-chain risk, effectively barring contractors from using its technology in military-related work. Last week, the company secured a legal victory after U.S. District Judge Rita Lin issued a preliminary injunction preventing the Pentagon from limiting the use of its AI models. The court said the government's classification of Anthropic as a "supply chain risk" was likely unlawful and may have been retaliatory, temporarily pausing the directive linked to the Donald Trump administration. California, however, signaled it may not automatically follow that decision. New Certification Framework For AI Vendors Coming Soon Within 120 days, the state's Department of General Services and Department of Technology are expected to propose a certification framework requiring vendors to attest to responsible AI governance and public safety protections. The move is part of a broader effort to formalize accountability standards for AI firms working with the state. Newsom Draws Contrast With Trump On AI Policy In a statement on X, Newsom highlighted California's focus on privacy and safety. "I just signed an executive order to actively make sure AI companies working with the state protect privacy and civil liberties," he said. 
"While Trump pressures companies to deploy AI for autonomous weapons and domestic surveillance, California is using our power ... to raise the bar on privacy and security."
[11]
California AI order requires firms seeking state contracts to have safeguards against abuse
March 30 (Reuters) - California Governor Gavin Newsom signed an executive order on Monday that requires firms seeking contracts with the state to provide safeguards against AI misuse, including the generation of illegal content, harmful bias and violations of civil rights.
* To avoid misinformation, the order requires agencies to watermark images or videos that may be generated through AI as per the guidance issued by the state.
* If the federal government labels a company as a supply chain risk, California will conduct its own assessment and may allow it to remain a contractor if it does not find it to be a risk.
* This order comes after the Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic, barring government contractors from using the firm's technology in their work for the U.S. military.
* Within 120 days, California's Department of General Services and Department of Technology will submit recommendations for new AI-related vendor certifications which would allow firms to attest to responsible AI governance and public-safety protections.
* The order underlines the state's efforts to strike an independent stance despite efforts by some Republican lawmakers for it to defer to federal authorities on law and regulation.
* In February, California Attorney General Rob Bonta told Reuters in an interview his office is developing its in-house expertise through its "AI oversight, accountability and regulation program." (Reporting by Carlos Méndez in Mexico City; Editing by Edwina Gibbs)
Gavin Newsom signed an executive order requiring AI companies doing business with California to implement safety and privacy guardrails. The move positions the state as a national testing ground for AI regulation, directly challenging the Trump administration's push for federal control and minimal state oversight of the AI industry.
Gavin Newsom signed an executive order on Monday requiring AI companies seeking contracts with California to institute comprehensive safety and privacy guardrails [1]. The order represents a direct challenge to the Trump administration's efforts to prevent what it calls "cumbersome" state-level AI regulation [5]. "California leads in AI, and we're going to use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way," Newsom said in a statement [1]. The move cements California's role as the national testing ground for AI rules, with industry observers noting that AI companies will likely treat the state's standards as a de facto national benchmark given California's market dominance [4].
Under the new state contract rules, AI companies vying for government contracts must explain their policies on preventing exploitation of individuals, including blocking the spread of child sexual abuse material and violent pornography [3][5]. The order also requires companies to detail how they address model bias and avoid unlawful discrimination, detention, and surveillance [5]. California will examine whether AI models are used to monitor individuals or block certain speech, and companies must demonstrate how they're avoiding bias in their systems [3]. The state has four months to develop comprehensive AI policies that prioritize public safety and consumer privacy [5]. Additionally, the order calls for watermarking of AI-generated content or manipulated videos created by state officials to guard against misinformation and allow consumers to distinguish between human-generated and AI-generated images [3].
In a pointed response to the Pentagon's recent dispute with Anthropic, the executive order enables California to separate its procurement authorization process from federal oversight [2][4]. When the Department of Defense labels a business an AI supply chain risk, as it did with San Francisco-based Anthropic last month, California will now conduct its own independent assessment [2]. The Pentagon had designated Anthropic a supply chain risk after the startup insisted on contract terms barring the military from using its systems for domestic mass surveillance and fully autonomous weaponry [2]. This provision signals California's willingness to chart its own course on tech lawmaking, even when it conflicts with federal oversight decisions.
Source: Axios
The executive order arrives as the White House maintains that the federal government should control AI regulation and that requiring AI companies to comply with 50 different sets of state laws would prevent the US from "winning" the global AI race [1]. Donald Trump's administration released a policy framework for regulating generative AI that focuses on job displacement, copyright issues, and protecting vulnerable groups like children, but critics argue it doesn't adequately regulate the fast-growing industry [1]. The White House framework takes a light-touch approach and notably does not address issues related to model bias, discrimination, or civil rights [2]. Trump signed an executive order in December directing the Justice Department to establish an "AI Litigation Task Force" to challenge state AI regulations [5]. Companies including Google, Meta, OpenAI and Andreessen Horowitz have called for a national standard for AI rather than navigating regulations across all 50 states [1].
Newsom, a 2028 Democratic presidential contender, is positioning himself as the inverse of Trump on AI regulation [4]. "While others in Washington are designing policy and creating contracts in the shadow of misuse, we're focused on doing this the right way," Newsom stated [1]. His handling of AI issues is closely watched by union leaders, who pledged in February they won't support his presidential run without stronger worker protections from the technology, and by Silicon Valley donors pouring money into California politics ahead of midterm elections [2]. OpenAI responded positively, saying "we are glad to see Governor Newsom continuing to lead on AI so California can continue to lead the world on AI," while Google and Anthropic declined to comment [4]. Multiple AI and tech policy sources suggest the executive order may lack strong legal teeth but will influence company policies because all major AI companies want to do business with California [4]. States have already passed more than 100 laws addressing AI concerns, from shielding children from chatbots to blocking the pilfering of copyright-protected material [5]. California legislators are also advancing additional AI bills, including a sweeping chatbot bill protecting minors [4].
Summarized by Navi