California mandates AI safety guardrails for state contractors as Newsom defies Trump's deregulation push


California Governor Gavin Newsom signed an executive order requiring AI companies seeking state contracts to implement strict safety and privacy protections. The move directly challenges the Trump administration's push for minimal AI regulation, positioning California as an independent force in tech oversight while addressing concerns about deepfakes, bias, and civil rights violations.

California Takes Bold Stance on AI Regulation

California Governor Gavin Newsom signed an executive order on Monday requiring AI companies that seek state contracts to implement comprehensive safety and privacy guardrails, marking a significant escalation in state-level AI regulation [1]. The executive order directly challenges the Trump administration's efforts to keep AI regulation minimal and federally controlled, setting up a potential showdown between state and federal authority [3]. Newsom emphasized California's leadership position, stating the state will "use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way" [1].

Source: Benzinga

Strict Requirements for AI Companies Seeking State Contracts

Companies vying for government contracts must now explain how their policies prevent technology misuse, including the distribution of child sexual abuse material and violent pornography [3]. The order mandates that AI companies demonstrate safeguards against misuse, detailing how their AI models avoid incorporating harmful bias and discrimination [4]. Firms must also outline policies aimed at preventing unlawful surveillance, detention, and civil rights violations [2]. Within 120 days, California's Department of General Services and Department of Technology will submit recommendations for new vendor certifications that allow firms to attest to responsible AI governance and public safety protections [4].

Watermarking AI-Generated Content to Combat Deepfakes

The executive order addresses growing concerns about misinformation by requiring state agencies to watermark AI-generated content, specifically images and videos created or manipulated with artificial intelligence [2]. Watermarking aims to help consumers distinguish human-generated from AI-generated material, directly tackling the spread of deepfakes that have raised public safety concerns [5]. State officials will develop best practices for this watermarking requirement as part of the broader effort to prevent the spread of misleading content [3].

Source: NYT

Independent Supply Chain Risk Assessment Challenges Federal Oversight

In a notable departure from federal oversight, California will conduct its own supply chain risk assessment even when the federal government designates a company as risky [2]. This provision became particularly significant following the Pentagon's recent designation of AI startup Anthropic as a supply-chain risk, which exposed tensions within the Trump administration's approach to AI for military use [2]. If California's independent assessment finds a company safe, the state may allow it to continue as a contractor despite federal restrictions [4]. This approach signals California's determination to maintain autonomy in tech policy decisions affecting consumer privacy and civil rights.

Federal-State Clash Over AI Regulation Intensifies

The California executive order emerges amid growing tension with the White House, which released a policy framework in December arguing that "United States AI companies must be free to innovate without cumbersome regulation" [3]. Donald Trump's administration maintains that requiring AI companies to comply with 50 different state laws would prevent the US from winning the global AI race [1]. Trump's December order directed the Justice Department to establish an "AI Litigation Task Force" to challenge state AI regulations [3]. Meanwhile, companies including Google, Meta, OpenAI, and Andreessen Horowitz have called for national standards rather than navigating diverse state requirements [1]. States have already passed more than 100 laws addressing AI-related concerns, from protecting children from chatbots to preventing copyright chaos for creators [3]. Critics argue the White House framework doesn't adequately address concerns about job loss, infrastructure expansion, and protection of vulnerable groups, leaving states to fill the regulatory gap.

Source: CNET

TheOutpost.ai

© 2026 Triveous Technologies Private Limited