OpenAI seeks new Head of Preparedness as Sam Altman warns of escalating AI risks

Reviewed by Nidhi Govil


OpenAI is hiring a new Head of Preparedness to manage emerging AI risks, from cybersecurity vulnerabilities to mental health impacts. Sam Altman acknowledged that rapidly improving AI models are creating real challenges. The role pays a $555,000 base salary, and Altman describes it as a high-stress job with immediate demands.

OpenAI Hiring Head of Preparedness Amid Growing Concerns

OpenAI is searching for a new executive to lead its preparedness efforts as AI models advance at an unprecedented pace. CEO Sam Altman announced the opening in a post on X, acknowledging that AI models are "starting to present some real challenges" that require closer oversight [1]. The position comes with a base salary of $555,000 plus equity, but Altman warned candidates that "this will be a stressful job and you'll jump into the deep end pretty much immediately" [4].

Source: Digit

The Head of Preparedness role focuses on executing OpenAI's preparedness framework, which the company describes as its approach to tracking and preparing for frontier capabilities that create new risks of severe harm [1]. The successful candidate will be responsible for building and coordinating capability evaluations, threat models, and mitigations that form a coherent safety pipeline [2].

Sam Altman on AI Risks in Cybersecurity and Mental Health

Sam Altman specifically highlighted two critical areas where mitigating risks from AI models has become urgent. First, he noted that models are "so good at computer security they are beginning to find critical vulnerabilities" [1]. This concern is backed by industry reports, including one from Anthropic last month detailing how a Chinese state-sponsored group manipulated its Claude Code tool to attempt infiltration of roughly thirty global targets, including tech companies, financial institutions, and government agencies [3].

Source: Digit

The potential impact on mental health represents another significant challenge. Altman stated that OpenAI saw "a preview of model-related mental health impacts in 2025," though he didn't elaborate on specific cases [4]. Recent lawsuits allege that ChatGPT reinforced users' delusions, increased social isolation, and even contributed to suicides [1]. OpenAI rolled back a GPT-4o update in April 2025 after acknowledging it had become overly sycophantic and could reinforce harmful user behavior [4].

High Turnover Plagues AI Model Safety Leadership

The Head of Preparedness position has seen more turnover than stability since its creation. OpenAI first announced the preparedness team in 2023 to study potential catastrophic risks, ranging from immediate threats like phishing attacks to more speculative dangers such as nuclear threats [1]. Aleksander Madry, director of MIT's Center for Deployable Machine Learning, held the role until July 2024, when OpenAI reassigned him to a reasoning-focused research position [4].

Following Madry's departure, OpenAI appointed Joaquin Quinonero Candela and Lilian Weng to lead the preparedness team. Neither lasted long in the position. Weng left OpenAI in November 2024, while Candela transitioned out of preparedness in April 2025 for a three-month coding internship before becoming head of recruiting [4]. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and AI safety [1].

Managing Advancing AI Capabilities and Biosecurity Threats

The new Head of Preparedness will oversee critical areas beyond cybersecurity and mental health. According to the job specification, biosecurity represents another major risk area the candidate will address [3]. In this context, biosecurity concerns include advanced AI systems being used to design bioweapons, a risk more than 100 scientists from universities and organizations worldwide have warned about [3].

Altman emphasized that the role involves setting guardrails for self-improving systems and safely managing the release of models with advanced biological capabilities [2]. The company recently updated its preparedness framework, stating it might "adjust" its safety requirements if a competing AI lab releases a high-risk model without similar protections [1].

What This Means for OpenAI's Safety Trajectory

The urgency behind OpenAI's search for a Head of Preparedness reflects broader tensions within the company about balancing rapid innovation with responsible development. Former employees have criticized OpenAI for prioritizing commercial opportunities and AGI goals over safety considerations [5]. One executive who left in October called out the company for not focusing enough on safety and the long-term effects of its AGI push [4].

Source: Fortune

Reports about burnout at OpenAI have been building, with former technical team executives describing a secretive, high-pressure environment and anecdotal reports of 12-hour days [3]. Whether the $555,000 compensation package will be enough to attract and retain talent in this demanding role remains uncertain, especially given the position's track record.

As large language models continue advancing and demonstrating frontier capabilities, the need for robust safety evaluations and threat models becomes more pressing. The successful candidate will need a nuanced understanding of how those capabilities could be abused, equipping cybersecurity defenders with cutting-edge tools while ensuring attackers cannot exploit them for harm [1]. This balance will prove critical as OpenAI navigates the complex landscape of AI risks while pursuing its mission to develop artificial general intelligence.
