6 Sources
[1]
Newsom Signs Executive Order Requiring AI Companies to Provide Safety, Privacy Guardrails
California Gov. Gavin Newsom on Monday signed an executive order requiring AI companies that do business with the state to institute safety and privacy guidelines. The order (PDF) is designed to ensure that companies contracting with the state adhere to rigorous standards and develop responsible policies to prevent misuse of their technology while protecting consumers' safety and privacy, according to Newsom's office. "California leads in AI, and we're going to use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way," Newsom said in a statement. "While others in Washington are designing policy and creating contracts in the shadow of misuse, we're focused on doing this the right way." The executive order comes as the Trump administration maintains that the federal government should be responsible for regulating the AI industry -- and that requiring AI companies to comply with 50 different sets of state laws would prevent the US from "winning" the global AI race. The White House recently released a new policy framework for regulating generative AI that focuses on some of the biggest concerns people have about AI: job loss, copyright chaos for creators, rapidly expanding infrastructure such as data centers and the protection of vulnerable groups like children. But critics say it doesn't go far enough to regulate the fast-growing AI industry. Some states have passed laws making it a crime to create sexual images of people without their consent, while others have placed restrictions on insurance companies using AI to approve or deny health care claims. Companies, including Google, Meta, OpenAI and Andreessen Horowitz, have been calling for national AI standards rather than litigating across all 50 states.
[2]
What to Know About California's Executive Order on A.I.
Gov. Gavin Newsom of California on Monday issued a first-of-its-kind executive order requiring safety and privacy guardrails from artificial intelligence companies that contract with the state. California has been a leader in tech lawmaking, and was the first state to pass a law mandating safety and transparency from the biggest A.I. companies. Mr. Newsom, a Democrat, signed the order partly as a message to President Trump, who has been trying to bat down state attempts to regulate A.I. Here's what's in his executive order.
Contractor Vetting
Companies vying for government contracts will first have to explain their safety and privacy policies around A.I. The state will look carefully at policies on how the companies prevent exploitation of individuals, including the spread of child sexual abuse materials. The government will also consider whether A.I. models, the technology that powers chatbots and other tools, are used to monitor individuals or are used to block certain speech. Companies should also explain how they are avoiding bias in their systems.
Independence From Federal Contracting Standards
If the federal government designates a company a supply chain risk, which the Pentagon has recently done with the A.I. start-up Anthropic, California will independently conduct its own assessment. If the company isn't determined to be a risk, the state may allow the company to continue on as a contractor. This is significant because the Pentagon's legal tussle with Anthropic, which had provided the Defense Department with A.I. technologies for use on classified systems, has exposed a rift in the administration's pursuit of A.I. for war use. The Pentagon terminated its contract with Anthropic after the company said the government couldn't use its models for mass surveillance and the deployment of autonomous weaponry.
Watermarking Requirement
The governor also called on state officials to begin watermarking A.I.-generated or manipulated videos that they create. The technique is aimed at guarding against the spread of misinformation. It would also allow consumers to tell the difference between human-generated and A.I.-generated images produced by the state.
[3]
California to impose new AI regulations in defiance of Trump call
Gavin Newsom signs order to prioritize public safety and rights as president seeks to prevent 'cumbersome' rules
California will impose new standards on artificial intelligence companies seeking to do business with the state, defying Donald Trump's demands to keep the controversial industry as deregulated as possible. Democratic governor Gavin Newsom signed an executive order on Monday that gives the state four months to develop AI policies that prioritize public safety. Companies hoping to sign contracts with the state of California will have to show they have policies to keep AI from distributing child sexual abuse material and violent pornography. They will also show how their models avoid incorporating "harmful bias" and detail policies aimed at avoiding "unlawful discrimination, detention, and surveillance". The order directs the state to come up with best practices for watermarking AI-generated or -manipulated images and videos. "California's always been the birthplace of innovation," Newsom wrote in a statement. "But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk. "California leads in AI, and we're going to use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way." California's changes are the latest in a series of state-level attempts to regulate an AI industry that has repeatedly raised public safety concerns and worries that the expensive tech will degrade the value of labor and kill jobs. States have passed more than 100 laws to shield children from chatbots and to block AI companies from pilfering copyright-protected material, according to the New York Times. The White House issued a national policy framework for AI in December that discouraged states from passing such regulations. "To win, United States AI companies must be free to innovate without cumbersome regulation," Trump's executive order announcing the framework reads. "But excessive state regulation thwarts this imperative." Trump's order directed the justice department in December to establish an "AI Litigation Task Force" to challenge state AI regulations.
[4]
California AI order requires firms seeking state contracts to have safeguards against abuse - The Economic Times
California Governor Gavin Newsom signed an executive order on Monday that requires firms seeking contracts with the state to provide safeguards against AI misuse, including the generation of illegal content, harmful bias and violations of civil rights. * To avoid misinformation, the order requires agencies to watermark images or videos that may be generated through AI as per the guidance issued by the state. * If the federal government labels a company as a supply chain risk, California will conduct its own assessment and may allow it to remain a contractor if it does not find it to be a risk. * This order comes after the Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic, barring government contractors from using the firm's technology in their work for the US military. * Within 120 days, California's Department of General Services and Department of Technology will submit recommendations for new AI-related vendor certifications which would allow firms to attest to responsible AI governance and public-safety protections. * The order underlines the state's efforts to strike an independent stance despite efforts by some Republican lawmakers for it to defer to federal authorities on law and regulation. * In February, California Attorney General Rob Bonta told Reuters in an interview his office is developing its in-house expertise through its "AI oversight, accountability and regulation program."
[5]
California Governor Gavin Newsom Cracks Down On AI Misuse With New Order Mandating Safeguards: 'While Tru
On Monday, California tightened oversight of AI, with Gov. Gavin Newsom (D-Calif.) rolling out sweeping new rules aimed at curbing misuse and protecting civil liberties. New AI Rules Target Misuse, Bias, Deepfakes Newsom signed an executive order requiring companies seeking contracts with the state of California to implement safeguards against AI misuse, including the creation of illegal content, harmful bias and civil rights violations. The order also directs state agencies to clearly label AI-generated images and videos with watermarks, a move designed to limit misinformation and the spread of deepfakes. California Takes Independent Stance On Federal AI Risk Labels This development follows the move by the Pentagon to designate Claude-parent Anthropic as a supply-chain risk, effectively barring contractors from using its technology in military-related work. Last week, the company secured a legal victory after U.S. District Judge Rita Lin issued a preliminary injunction preventing the Pentagon from limiting the use of its AI models. The court said the government's classification of Anthropic as a "supply chain risk" was likely unlawful and may have been retaliatory, temporarily pausing the directive linked to the Donald Trump administration. California, however, signaled it may not automatically follow that decision. New Certification Framework For AI Vendors Coming Soon Within 120 days, the state's Department of General Services and Department of Technology are expected to propose a certification framework requiring vendors to attest to responsible AI governance and public safety protections. The move is part of a broader effort to formalize accountability standards for AI firms working with the state. Newsom Draws Contrast With Trump On AI Policy In a statement on X, Newsom highlighted California's focus on privacy and safety. "I just signed an executive order to actively make sure AI companies working with the state protect privacy and civil liberties," he said. 
"While Trump pressures companies to deploy AI for autonomous weapons and domestic surveillance, California is using our power ... to raise the bar on privacy and security."
[6]
California AI order requires firms seeking state contracts to have safeguards against abuse
March 30 (Reuters) - California Governor Gavin Newsom signed an executive order on Monday that requires firms seeking contracts with the state to provide safeguards against AI misuse, including the generation of illegal content, harmful bias and violations of civil rights. * To avoid misinformation, the order requires agencies to watermark images or videos that may be generated through AI as per the guidance issued by the state. * If the federal government labels a company as a supply chain risk, California will conduct its own assessment and may allow it to remain a contractor if it does not find it to be a risk. * This order comes after the Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic, barring government contractors from using the firm's technology in their work for the U.S. military. * Within 120 days, California's Department of General Services and Department of Technology will submit recommendations for new AI-related vendor certifications which would allow firms to attest to responsible AI governance and public-safety protections. * The order underlines the state's efforts to strike an independent stance despite efforts by some Republican lawmakers for it to defer to federal authorities on law and regulation. * In February, California Attorney General Rob Bonta told Reuters in an interview his office is developing its in-house expertise through its "AI oversight, accountability and regulation program." (Reporting by Carlos Méndez in Mexico City; Editing by Edwina Gibbs)
California Governor Gavin Newsom signed an executive order requiring AI companies seeking state contracts to implement strict safety and privacy protections. The move directly challenges the Trump administration's push for minimal AI regulation, positioning California as an independent force in tech oversight while addressing concerns about deepfakes, bias, and civil rights violations.
California Governor Gavin Newsom signed an executive order on Monday requiring AI companies that seek state contracts to implement comprehensive safety and privacy guardrails, marking a significant escalation in state-level AI regulation [1]. The California executive order directly challenges the Trump administration's efforts to keep AI regulation minimal and federally controlled, setting up a potential showdown between state and federal authority [3]. Gavin Newsom emphasized California's leadership position, stating the state will "use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way" [1].
Companies vying for government contracts must now explain their policies on how they prevent technology misuse, including the distribution of child sexual abuse material and violent pornography [3]. The order mandates that AI companies demonstrate safeguards against AI misuse, detailing how their AI models avoid incorporating harmful bias and discrimination [4]. Firms must also outline policies aimed at preventing unlawful surveillance, detention, and civil rights violations [2]. Within 120 days, California's Department of General Services and Department of Technology will submit recommendations for new vendor certifications that allow firms to attest to responsible AI governance and public safety protections [4].
The executive order addresses growing concerns about misinformation by requiring state agencies to watermark AI-generated content, specifically images and videos created or manipulated through artificial intelligence [2]. This technique aims to help consumers distinguish between human-generated and AI-generated materials, directly tackling the spread of deepfakes that have raised public safety concerns [5]. State officials will develop best practices for this watermarking requirement as part of the broader effort to prevent the spread of misleading content [3].
In a notable departure from federal oversight, California will conduct its own supply chain risk assessment even when the federal government designates a company as risky [2]. This provision became particularly significant following the Pentagon's recent designation of AI startup Anthropic as a supply-chain risk, which exposed tensions within the Trump administration's approach to AI for military use [2]. If California's independent assessment finds a company safe, the state may allow it to continue as a contractor despite federal restrictions [4]. This approach signals California's determination to maintain autonomy in tech policy decisions affecting consumer privacy and civil rights.
The California executive order emerges amid growing tension with the White House, which released a policy framework in December arguing that "United States AI companies must be free to innovate without cumbersome regulation" [3]. Donald Trump's administration maintains that requiring AI companies to comply with 50 different state laws would prevent the US from winning the global AI race [1]. Trump's December order directed the Justice Department to establish an "AI Litigation Task Force" to challenge state AI regulations [3]. Meanwhile, companies including Google, Meta, OpenAI, and Andreessen Horowitz have called for national standards rather than navigating diverse state requirements [1]. States have already passed more than 100 laws addressing AI-related concerns, from protecting children from chatbots to preventing copyright chaos for creators [3]. Critics argue the White House framework doesn't adequately address concerns about job loss, infrastructure expansion, and protection of vulnerable groups, leaving states to fill the regulatory gap.
Summarized by
Navi