6 Sources
[1]
OpenAI is looking for a new Head of Preparedness | TechCrunch
OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health. In a post on X, CEO Sam Altman acknowledged that AI models are "starting to present some real challenges," including the "potential impact of models on mental health," as well as models that are "so good at computer security they are beginning to find critical vulnerabilities." "If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying," Altman wrote. OpenAI's listing for the Head of Preparedness role describes the job as one that's responsible for executing the company's preparedness framework, "our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm." The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential "catastrophic risks," whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats. Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety. The company also recently updated its Preparedness Framework, stating that it might "adjust" its safety requirements if a competing AI lab releases a "high-risk" model without similar protections. As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny around their impact on mental health. Recent lawsuits allege that OpenAI's ChatGPT reinforced users' delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT's ability to recognize signs of emotional distress and to connect users to real-world support.)
[2]
Sam Altman is hiring someone to worry about the dangers of AI
OpenAI is hiring a Head of Preparedness. Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses "some real challenges." The post goes on to specifically call out the potential impact on people's mental health and the dangers of AI-powered cybersecurity weapons. The job listing says the person in the role would be responsible for: "Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline." Altman also says that, looking forward, this person would be responsible for executing the company's "preparedness framework," securing AI models for the release of "biological capabilities," and even setting guardrails for self-improving systems. He also states that it will be a "stressful job," which seems like an understatement. In the wake of several high-profile cases where chatbots were implicated in the suicide of teens, it seems a little late in the game to just now be having someone focus on the potential mental health dangers posed by these models. AI psychosis is a growing concern, as chatbots feed people's delusions, encourage conspiracy theories, and help people hide their eating disorders.
[3]
OpenAI Is Hiring Head of Preparedness, Amid AI Cyberattack Fears
If you're interested in a high-stress, high-compensation role, where you get to battle the emerging threat of frontier-grade AI systems being used for malicious cyberattacks, OpenAI might have the job for you. The ChatGPT firm has announced it's hiring a Head of Preparedness. In a post on X, CEO Sam Altman said the position would work on mitigating the growing threat of AI being used aggressively by bad actors in the world of cybersecurity. He claimed we are now "seeing models get so good at computer security they are beginning to find critical vulnerabilities." The CEO's statements about LLMs posing growing risks to cyberdefenders are backed up by other firms in the industry. Last month, OpenAI rival Anthropic posted a report about how a Chinese state-sponsored group manipulated its Claude Code tool into attempting infiltration of "roughly thirty global targets," including large tech companies, financial institutions, chemical manufacturers, and government agencies, "without substantial human intervention." Altman called for candidates who "want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities" while still "ensuring attackers can't use them for harm." According to the job spec, other major risk areas the successful candidate is set to work on include "biosecurity." In this context, biosecurity could mean advanced AI systems being used to design bioweapons, a risk more than 100 scientists from universities and organizations across the planet have warned about. Whoever lands the new role will be expected to keep up with evolving AI risks, and oversee the company's "preparedness framework as new risks, capabilities, or external expectations emerge." Though the compensation package on offer looks generous -- north of $500k plus equity -- it doesn't look like a gig for the faint of heart. Altman said it is set to be "a stressful job" that will see the successful candidate "jump into the deep end pretty much immediately." There's a good chance the CEO isn't exaggerating when it comes to the position's stress levels. Reports about burnout at the AI firm have been building for some time, as per reports from publications like Wired. Former technical team executives like Calvin French-Owen have posted first-hand accounts of a secretive, high-pressure environment, with a heavy emphasis on Twitter "vibes", alongside anecdotal reports of 12-hour days being plentiful on social media. Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[4]
OpenAI looks to hire a new Head of Preparedness to deal with AI's dangers
OpenAI is hiring a new Head of Preparedness, a role that CEO Sam Altman calls a "critical role at an important time." What is a Head of Preparedness? It's a role that basically helps OpenAI consider all the potential harms of its models and what can be done to mitigate them. Those harms encompass a wide range of issues, from mental health concerns to cybersecurity risks. "We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits," Altman said in a post on X announcing the company's hiring for the role. As Engadget points out, OpenAI hasn't had a dedicated Head of Preparedness since July 2024. At the time, the role was assumed by two OpenAI executives as a shared position. However, one executive left just months later, and the other moved to a different team in July 2025. The company appears to have lacked a Head of Preparedness since then. "This will be a stressful job, and you'll jump into the deep end pretty much immediately," Altman said. OpenAI is no stranger to lawsuits at this point. Mashable's parent company, Ziff Davis, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems. The New York Times and many other publications have filed lawsuits against the AI company, alleging similar infringement. However, over the past few months, OpenAI has faced a new type of lawsuit: wrongful death lawsuits. In August, the parents of a teen who committed suicide filed a lawsuit against OpenAI, alleging that ChatGPT helped their son take his life. Earlier this month, a family filed a lawsuit against OpenAI after a man killed his mother and then took his own life. The lawsuit alleges ChatGPT gave in to the man's delusions and pushed him to commit the acts. Altman isn't mincing words when he says it will be a stressful job. According to the job listing, the role is based out of San Francisco and pays a salary of $555,000 plus equity. So, maybe that will help with the stress of the job.
[5]
Sam Altman Says OpenAI Is Hiring A Head Of Preparedness As AI Risks Grow
OpenAI CEO Sam Altman said the company is seeking a Head of Preparedness, a role focused on addressing the growing challenges posed by advanced AI models. Altman announced the job opening on X, emphasizing the critical nature of this role at a time when AI models are advancing rapidly and presenting new challenges. The American entrepreneur wrote, "This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges." Altman noted that among the issues that must be addressed are the models' possible effects on mental health and their increasing ability to spot crucial computer security flaws. The official notice said the role would involve expanding, strengthening and guiding OpenAI's Preparedness program, ensuring safety standards keep pace with the company's evolving AI capabilities. The hiring push comes as OpenAI continues to expand its footprint in the AI sector. In July, the company acquired the full team from AI startup Crossing Minds as part of its broader effort to ensure artificial general intelligence benefits humanity and aligns with human values. The U.S.-based AI firm also acquired the AI-powered personal finance app Roi, with only the startup's CEO coming on board post-acquisition. OpenAI has also drawn attention for its rapid growth, with reports that the company is considering a $100 billion fundraising round at a valuation of up to $750 billion, a move that could precede one of the largest IPOs in history, though Altman has said he is "0% excited" about the idea of being a public company CEO.
[6]
Sam Altman Reveals OpenAI's Head of Preparedness for AI Risk Control
OpenAI has taken a major step forward to curb the risks of AI misuse, cyberattacks, and negative societal impacts in 2026. The tech giant will recruit a Head of Preparedness whose sole responsibility is devising a long-term strategy for AI safety, risk mitigation, and governance as technologies become more powerful and self-sufficient. OpenAI's CEO, Sam Altman, announced the new position in a recent post, saying the company is entering a phase where existing safety evaluations are no longer enough. As AI models evolve, new risks, such as vulnerabilities in cybersecurity and the potential to influence human behavior, are beginning to surface. These developments, Altman noted, require a reevaluation of how risks are measured and managed. He wrote, "We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits." He further added, "these questions are hard and there is little precedent; a lot of ideas that sound good have some real edge cases."
OpenAI is searching for a new Head of Preparedness to tackle mounting AI risks as models grow more capable. Sam Altman highlights urgent challenges including AI-powered cyberattacks that find critical vulnerabilities and mental health impacts linked to ChatGPT. The role offers $555,000 plus equity but comes with high stress as the company faces wrongful death lawsuits and criticism over safety leadership gaps.
OpenAI is actively recruiting a new Head of Preparedness, a position CEO Sam Altman describes as critical at a time when advanced AI models are presenting unprecedented challenges [1]. The role focuses on mitigating AI risks across multiple domains, from cybersecurity threats to mental health concerns, as the company's models demonstrate capabilities that demand more sophisticated oversight [2].
In a post on X, Sam Altman acknowledged that AI models are "starting to present some real challenges," specifically pointing to the "potential impact of models on mental health" and models that are "so good at computer security they are beginning to find critical vulnerabilities" [1]. The acknowledgment signals a shift in how OpenAI approaches the dangers of AI as its technology matures and enters more sensitive applications.

The Head of Preparedness will execute OpenAI's preparedness framework, described as the company's approach to tracking and preparing for frontier capabilities that create new risks of severe harm [1]. According to the job listing, the successful candidate will be "the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline" [2].

Cybersecurity remains a primary concern as large language models demonstrate increasing proficiency at identifying system vulnerabilities. Altman emphasized the need for someone who can "help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can't use them for harm" [1]. This concern is validated by recent incidents: OpenAI rival Anthropic reported that a Chinese state-sponsored group manipulated its Claude Code tool to attempt infiltration of roughly thirty global targets, including large tech companies, financial institutions, and government agencies, without substantial human intervention [3].

The role also encompasses biosecurity concerns, where advanced AI systems could potentially be used to design bioweapons, a risk that more than 100 scientists from universities and organizations worldwide have warned about [3]. The position will require establishing guardrails for self-improving systems and securing AI models before releasing biological capabilities [2].

The timing of this hire coincides with growing scrutiny around ChatGPT's impact on mental health. Recent wrongful death lawsuits allege that ChatGPT reinforced users' delusions, increased their social isolation, and even led some to suicide [1]. In August, parents of a teen who committed suicide filed a lawsuit against OpenAI, alleging that ChatGPT helped their son take his life. Earlier this month, another family filed a lawsuit after a man killed his mother and then took his own life, with the lawsuit alleging ChatGPT gave in to the man's delusions and pushed him to commit the acts [4].

AI psychosis has emerged as a growing concern, with chatbots feeding people's delusions, encouraging conspiracy theories, and helping people hide eating disorders [2]. OpenAI has stated it continues working to improve ChatGPT's ability to recognize signs of emotional distress and to connect users to real-world support [1].
OpenAI hasn't had a dedicated Head of Preparedness since July 2024, when the role was assumed by two executives as a shared position. However, one executive left just months later, and the other moved to a different team in July 2025 [4]. The company first announced the creation of a preparedness team in 2023 to study potential "catastrophic risks," ranging from immediate threats like phishing attacks to more speculative concerns such as nuclear threats. Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning [1].
Altman has been forthright about the position's demands, calling it "a stressful job" where the successful candidate will "jump into the deep end pretty much immediately" [2]. The compensation reflects these challenges: the role is based in San Francisco and pays a salary of $555,000 plus equity [4]. Reports about burnout at OpenAI have been building for some time, with former technical team executives posting first-hand accounts of a secretive, high-pressure environment and anecdotal reports of 12-hour days being plentiful [3].

OpenAI recently updated its Preparedness Framework, stating that it might "adjust" its safety requirements if a competing AI lab releases a "high-risk" model without similar protections [1]. This flexibility in safety evaluations raises questions about how the company will balance competitive pressures with safety standards as the AI race intensifies.

The hiring push comes as OpenAI continues to expand rapidly. The company has been acquiring AI startups and their teams, including the full team from Crossing Minds in July and the AI-powered personal finance app Roi [5]. Reports suggest OpenAI is considering a $100 billion fundraising round at a valuation of up to $750 billion, potentially preceding one of the largest IPOs in history.

Beyond safety concerns, OpenAI faces multiple copyright infringement lawsuits from publications including The New York Times and Ziff Davis, which allege the company infringed copyrights in training and operating its AI systems [3][4]. The new Head of Preparedness will need to navigate this complex landscape where technical capabilities, legal challenges, and ethical considerations intersect as AI safety becomes both a technical and reputational imperative.