OpenAI accused of violating California AI safety law with high-risk GPT-5.3-Codex release


OpenAI is under fire from watchdog group The Midas Project for allegedly violating California's new AI safety law with the release of its GPT-5.3-Codex coding model. The company allegedly failed to implement legally required safeguards for its first model to reach "high" cybersecurity risk, raising questions about compliance with the state's SB 53 regulations, which took effect in January.

OpenAI Under Scrutiny for High-Risk Model Release

OpenAI faces potential legal consequences after The Midas Project alleged the company violated California's SB 53 AI safety law with its GPT-5.3-Codex release last week. The controversy centers on whether OpenAI implemented required safeguards for what CEO Sam Altman acknowledged as the company's first model to reach "high" cybersecurity risk on its internal Preparedness Framework [1]. The coding model, designed to help OpenAI reclaim its lead in AI-powered coding, demonstrates markedly higher performance on coding tasks than earlier versions and rival models from Anthropic, according to benchmark data released by the company [1].

Source: Benzinga

California's SB 53 Sets New Compliance Standards

California's SB 53, which took effect in January, requires major AI companies to publish and adhere to their own safety frameworks detailing how they will prevent catastrophic risks, defined as incidents causing more than 50 deaths or $1 billion in property damage [1]. The law also prohibits misleading statements about compliance. OpenAI's own safety framework mandates special safeguards for high-risk models to prevent AI from acting deceptively, sabotaging safety research, or hiding its true capabilities. However, the company allegedly did not implement these safeguards before launching GPT-5.3-Codex, prompting watchdog groups to accuse it of violating the law [1].

OpenAI Defends Its Interpretation of Safety Rules

OpenAI maintains confidence in its compliance with frontier safety laws, arguing the framework's wording is "ambiguous." In a safety system card accompanying the model, the company stated that safeguards are only necessary when high cyber risk occurs "in conjunction with" long-range autonomy, the ability to operate independently over extended periods [1]. An OpenAI spokesperson told Fortune that "GPT-5.3-Codex completed our full testing and governance process, as detailed in the publicly released system card, and did not demonstrate long-range autonomy capabilities based on proxy evaluations and confirmed by internal expert judgments, including from our Safety Advisory Group" [1]. The company plans to clarify the language in its current framework rather than change its internal practices.

Safety Researchers Challenge OpenAI's Position

Safety researchers have disputed OpenAI's interpretation of the regulations. Nathan Calvin, Vice President of State Affairs and General Counsel at Encode, stated in a post on X: "Rather than admit they didn't follow their plan or update it before the release, it looks like OpenAI is saying that the criteria was ambiguous. From reading the relevant docs...it doesn't look ambiguous to me" [1]. The Midas Project argues that OpenAI cannot definitively prove the model lacks the autonomy that would trigger the extra safeguards, noting that the company's previous, less advanced model already topped global benchmarks for autonomous task completion [1].

Potential Penalties and Market Implications

Tyler Johnston, founder of The Midas Project, called the potential violation "especially embarrassing given how low the floor SB 53 sets is: basically just adopt a voluntary safety plan of your choice and communicate honestly about it, changing it as needed, but not violating or lying about it" [2]. If an investigation confirms the allegations, SB 53 allows for substantial penalties, potentially running into millions of dollars depending on the severity and duration of non-compliance [1]. The California Attorney General's Office stated it is "committed to enforcing the laws of our state, including those enacted to increase transparency and safety in the emerging AI space," though it declined to comment on potential investigations [1].

OpenAI's Growth Amid Competitive Pressure

Despite the controversy, OpenAI's momentum continues. Sam Altman reassured employees and investors in an internal Slack message that ChatGPT had returned to more than 10% monthly growth [2]. He noted that Codex usage rose approximately 50% following the GPT-5.3-Codex launch and the release of a standalone Mac app [2]. The situation highlights the tension between rapid AI development and evolving regulatory frameworks, with industry observers watching closely to see how California's approach to AI safety enforcement will shape future model releases across the sector.

Source: Fortune
