21 Sources
[1]
OpenAI is looking for a new Head of Preparedness | TechCrunch
OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health. In a post on X, CEO Sam Altman acknowledged that AI models are "starting to present some real challenges," including the "potential impact of models on mental health," as well as models that are "so good at computer security they are beginning to find critical vulnerabilities." "If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying," Altman wrote. OpenAI's listing for the Head of Preparedness role describes the job as one that's responsible for executing the company's preparedness framework, "our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm." The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential "catastrophic risks," whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats. Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety. The company also recently updated its Preparedness Framework, stating that it might "adjust" its safety requirements if a competing AI lab releases a "high-risk" model without similar protections. As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny around their impact on mental health. Recent lawsuits allege that OpenAI's ChatGPT reinforced users' delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT's ability to recognize signs of emotional distress and to connect users to real-world support.)
[2]
Sam Altman is hiring someone to worry about the dangers of AI
OpenAI is hiring a Head of Preparedness. Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses "some real challenges." The post goes on to specifically call out the potential impact on people's mental health and the dangers of AI-powered cybersecurity weapons. The job listing says the person in the role would be responsible for: "Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline." Altman also says that, looking forward, this person would be responsible for executing the company's "preparedness framework," securing AI models for the release of "biological capabilities," and even setting guardrails for self-improving systems. He also states that it will be a "stressful job," which seems like an understatement. In the wake of several high-profile cases where chatbots were implicated in the suicide of teens, it seems a little late in the game to just now be having someone focus on the potential mental health dangers posed by these models. AI psychosis is a growing concern, as chatbots feed people's delusions, encourage conspiracy theories, and help people hide their eating disorders.
[3]
OpenAI Is Hiring Head of Preparedness, Amid AI Cyberattack Fears
If you're interested in a high-stress, high-compensation role, where you get to battle the emerging threat of frontier-grade AI systems being used for malicious cyberattacks, OpenAI might have the job for you. The ChatGPT firm has announced it's hiring a Head of Preparedness. In a post on X, CEO Sam Altman said the position would work on mitigating the growing threat of AI being used aggressively by bad actors in the world of cybersecurity. He claimed we are now "seeing models get so good at computer security they are beginning to find critical vulnerabilities." The CEO's statements about LLMs posing growing risks to cyberdefenders are backed up by other firms in the industry. Last month, OpenAI rival Anthropic posted a report about how a Chinese state-sponsored group manipulated its Claude Code tool into attempting infiltration of "roughly thirty global targets," including large tech companies, financial institutions, chemical manufacturers, and government agencies, "without substantial human intervention." Altman called for candidates who "want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities" while still "ensuring attackers can't use them for harm." According to the job spec, other major risk areas the successful candidate is set to work on include "biosecurity." In this context, biosecurity could mean advanced AI systems being used to design bioweapons, a risk more than 100 scientists from universities and organizations across the planet have warned about. Whoever lands the new role will be expected to keep up with evolving AI risks, and oversee the company's "preparedness framework as new risks, capabilities, or external expectations emerge." Though the compensation package on offer looks generous -- north of $500k plus equity -- it doesn't look like a gig for the faint of heart. Altman said it is set to be "a stressful job" that will see the successful candidate "jump into the deep end pretty much immediately." There's a good chance the CEO isn't exaggerating when it comes to the position's stress levels. Reports about burnout at the AI firm have been building for some time, as per reports from publications like Wired. Former technical team executives like Calvin French-Owen have posted first-hand accounts of a secretive, high-pressure environment, with a heavy emphasis on Twitter "vibes", alongside anecdotal reports of 12-hour days being plentiful on social media. Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[4]
OpenAI seeks new safety chief as Altman flags growing risks
There's a big salary up for grabs if you can handle a high-stress role with a track record of turnover. How'd you like to earn more than half a million dollars working for one of the world's fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn't stick around. Over the weekend, OpenAI boss Sam Altman went public with a search for a new Head of Preparedness, saying rapidly improving AI models are creating new risks that need closer oversight. Altman flagged an opening for the company's Head of Preparedness on Saturday in a post on X. Describing the role, which carries a $555,000 base salary plus equity, as one focused on securing OpenAI's systems and understanding how they could be abused, Altman also noted that AI models are beginning to present "some real challenges" as they rapidly improve and gain new capabilities. "The potential impact of models on mental health was something we saw a preview of in 2025," Altman said, without elaborating on specific cases or products. AI has been flagged as an increasingly common trigger of psychological troubles in both juveniles and adults, with chatbots reportedly linked to multiple deaths in the past year. OpenAI, one of the most popular chatbot makers in the market, rolled back a GPT-4o update in April 2025 after acknowledging it had become overly sycophantic and could reinforce harmful or destabilizing user behavior. Despite that, OpenAI released ChatGPT-5.1 last month, which included a number of emotional dependence-nurturing features, like the inclusion of emotionally-suggestive language, "warmer, more intelligent" responses, and the like. Sure, it might be less sycophantic, but it'll speak to you with more intimacy than ever before, making it feel more like a human companion instead of the impersonal, logical ship computer from Star Trek that spits facts with little regard for feeling. It's no wonder the company needs someone to steer the ship with regard to model safety. "We have a strong foundation of measuring growing capabilities," Altman said, "but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused." According to the job posting, the Head of Preparedness will be responsible for leading technical strategy and execution of OpenAI's preparedness framework [PDF], which the company describes as its approach "to tracking and preparing for frontier capabilities that create new risks of severe harm." It's not a new role, mind you, but it's one that's seen more turnover than the Defense Against Dark Arts faculty position at Hogwarts. Aleksander Madry, director of MIT's Center for Deployable Machine Learning and faculty leader at the Institute's AI Policy Forum, occupied the Preparedness role until July 2024, when OpenAI reassigned him to a reasoning-focused research role. This, mind you, came in the wake of a number of high-profile safety leadership exits at the company and a partial reset of OpenAI's safety team structure. In Madry's place, OpenAI appointed Joaquin Quinonero Candela and Lilian Weng to lead the preparedness team. Both occupied other roles at OpenAI prior to heading up preparedness, but neither lasted long in the position. Weng left OpenAI in November 2024, while Candela left his role as head of preparedness in April for a three-month coding internship at OpenAI. While still an OpenAI employee, he's out of the technical space entirely and is now serving as head of recruiting.
"This will be a stressful job and you'll jump into the deep end pretty much immediately," Altman said of the open position. Understandably so - OpenAI and model safety have long had a contentious relationship, as numerous ex-employees have attested. One executive who left the company in October called the Altman outfit out for not being as focused on safety and the long-term effects of its AGI push as it should be, suggesting that the company was pushing ahead in its goal to dominate the industry at the expense of the rest of society. Will $555,000 be enough to keep a new Preparedness chief in the role? Skepticism may be warranted. OpenAI didn't respond to questions for this story. ®
[5]
Sam Altman wants to pay some lucky person $555,000 a year to look after OpenAI's AI - but obviously, all is not quite as it seems
Sam Altman has shared that OpenAI is looking for a new Head of Preparedness, citing its rapidly improving AI tools and models and the emerging risks they pose. The ChatGPT maker is offering a salary of $555,000 plus equity to one lucky individual; however, CEO Altman has warned the "Head of Preparedness" job is set to be a high-stress role. Key to the position is understanding how advanced models could be abused, steering safety decisions, and securing OpenAI's systems to mitigate risks. Altman said in his post that OpenAI saw a "preview" of model-related mental health impacts in 2025, seemingly acknowledging a number of deaths related to ChatGPT interactions. OpenAI had already rolled back a GPT-4o update after admitting it could reinforce harmful user behavior. "We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits," he wrote. The soon-to-be-appointed head will lead a "small [but] high-impact" team, adhering to OpenAI's Preparedness framework. Aleksander Madry previously held the role, but he was reassigned by OpenAI. Later, Joaquin Quiñonero Candela occupied the role, but Candela is now Head of Recruiting. Lilian Weng also spent some time as Head of Preparedness. "This will be a stressful job and you'll jump into the deep end pretty much immediately," Altman warned. The company has previously been criticized by former employees for prioritizing commercial opportunities and AGI goals, but Altman's latest push for a Head of Preparedness combined with a healthy salary package could mark an important shift.
[6]
Sam Altman launches job search to fill 'critical role' to protect against AI's harms
New head of preparedness at OpenAI will face daunting in-tray amid fears from some experts that AI could 'turn on us' The maker of ChatGPT has advertised a $555,000-a-year vacancy with a daunting job description that would cause Superman to take a sharp intake of breath. In what may be close to the impossible job, the "head of preparedness" at OpenAI will be directly responsible for defending against risks from ever more powerful AIs to human mental health, cybersecurity and biological weapons. That is before the successful candidate has to start worrying about the possibility that AIs may soon begin training themselves amid fears from some experts they could "turn against us". "This will be a stressful job, and you'll jump into the deep end pretty much immediately," said Sam Altman, the chief executive of the San Francisco-based organisation, as he launched the hunt to fill "a critical role" to "help the world". The successful candidate will be responsible for evaluating and mitigating emerging threats and "tracking and preparing for frontier capabilities that create new risks of severe harm". Some previous executives in the post have only lasted for short periods. The opening comes against a backbeat of warnings from inside the AI industry about the risks of the increasingly capable technology. On Monday, Mustafa Suleyman, the chief executive of Microsoft AI, told BBC Radio 4's Today programme: "I honestly think that if you're not a little bit afraid at this moment, then you're not paying attention." Demis Hassabis, the Nobel prize-winning co-founder of Google DeepMind, this month warned of risks that included AIs going "off the rails in some way that harms humanity". Amid resistance from Donald Trump's White House, there is little regulation of AI at national or international level. Yoshua Bengio, a computer scientist known as one of the "godfathers of AI", recently said: "A sandwich has more regulation than AI." The result is that AI companies are largely regulating themselves. Altman said on X as he launched the job search: "We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits. These questions are hard and there is little precedent." One user responded sardonically: "Sounds pretty chill, is there vacation included?" What is included is an unspecified slice of equity in OpenAI, a company that has been valued at $500bn. Last month, the rival company Anthropic reported the first AI-enabled cyber-attacks in which artificial intelligence acted largely autonomously under the supervision of suspected Chinese state actors to successfully hack and access targets' internal data. This month OpenAI said its latest model was almost three times better at hacking than three months earlier and said "we expect that upcoming AI models will continue on this trajectory". OpenAI is also defending a lawsuit from the family of Adam Raine, a 16-year old from California who killed himself after alleged encouragement from ChatGPT. It has argued Raine misused the technology. Another case, filed this month, claims ChatGPT encouraged the paranoid delusions of a 56-year old in Connecticut, Stein-Erik Soelberg, who then murdered his 83-year old mother and killed himself. 
An OpenAI spokesperson said it was reviewing the filings in the Soelberg case, which it described as "incredibly heartbreaking" and that it was improving ChatGPT's training "to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support".
[7]
OpenAI looks to hire a new Head of Preparedness to deal with AI's dangers
OpenAI is hiring a new Head of Preparedness, a role that CEO Sam Altman calls a "critical role at an important time." What is a Head of Preparedness? It's a role that basically helps OpenAI consider all the potential harms of its models and what can be done to mitigate them. Those harms encompass a wide range of issues, from mental health concerns to cybersecurity risks. "We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits," Altman said in a post on X announcing the company's hiring for the role. As Engadget points out, OpenAI hasn't had a dedicated Head of Preparedness since July 2024. At the time, the role was assumed by two OpenAI executives as a shared position. However, one executive left just months later, and the other moved to a different team in July 2025. The company appears to have lacked a Head of Preparedness since then. "This will be a stressful job, and you'll jump into the deep end pretty much immediately," Altman said. OpenAI is no stranger to lawsuits at this point. Mashable's parent company, Ziff Davis, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems. The New York Times and many other publications have filed lawsuits against the AI company, alleging similar infringement. However, over the past few months, OpenAI has faced a new type of lawsuit: wrongful death lawsuits. In August, the parents of a teen who committed suicide filed a lawsuit against OpenAI, alleging that ChatGPT helped their son take his life. Earlier this month, a family filed a lawsuit against OpenAI after a man killed his mother and then took his own life. The lawsuit alleges ChatGPT gave in to the man's delusions and pushed him to commit the acts. Altman isn't mincing words when he says it will be a stressful job. According to the job listing, the role is based out of San Francisco and pays a salary of $555,000 plus equity. So, maybe that will help with the stress of the job.
[8]
OpenAI wants to hire someone to handle ChatGPT risks that can't be predicted
The role focuses on predicting, testing, and reducing real-world AI harms. OpenAI is betting big on a role designed to stop AI risks before they spiral. The company has posted a new senior role called Head of Preparedness, a position focused on identifying and reducing the most serious dangers that could emerge from advanced AI chatbots. Along with the responsibility comes a headline-grabbing compensation package of $555,000 plus equity. In a public post announcing the opening, Sam Altman called it "a critical role at an important time," noting that while AI models are now capable of "many great things," they are also "starting to present some real challenges." What the Head of Preparedness will actually do: The person holding this position will focus on extreme but realistic AI risks, including misuse, cybersecurity threats, biological concerns, and broader societal harm. Sam Altman said OpenAI now needs a "more nuanced understanding" of how growing capabilities could be abused without blocking the benefits. He also did not sugarcoat the job. "This will be a stressful job," Altman wrote, adding that whoever takes it on will be jumping "into the deep end pretty much immediately." The hire comes at a sensitive moment for OpenAI, which has faced growing regulatory scrutiny over AI safety in the past year. That pressure has intensified amid allegations linking ChatGPT interactions to several suicide cases, raising broader concerns about AI's impact on mental health. In one case, parents of a 16-year-old sued OpenAI after alleging the chatbot encouraged their son to plan his own suicide, prompting the company to roll out new safety measures for users under 18. Another lawsuit claims ChatGPT fueled paranoid delusions in a separate case that ended in murder and suicide, leading OpenAI to say it is working on better ways to detect distress, de-escalate conversations, and direct users to real-world support. OpenAI's safety push comes at a time when millions report emotional reliance on ChatGPT and regulators are probing risks for children, underscoring why preparedness matters beyond just engineering.
[9]
OpenAI is paying $555,000 to hire a head of 'preparedness'
OpenAI is offering $555,000 plus equity for a Head of Preparedness, which is either the industry growing up or the industry hiring someone to stand closer to the blast radius. But any anxiety will be well-compensated. The job listing reads like a grown-up version of the "imagine the worst-case scenario" exercise -- except the worst case has a line item, a launch calendar, and a board committee. The role is, on paper, less about moral philosophy and more about industrial plumbing. You'd run the technical strategy and execution of OpenAI's Preparedness Framework, coordinating capability evaluations, threat models, and mitigations into what the posting calls an "operationally scalable safety pipeline." The company's pitch is that this person will "track" frontier capabilities that create "new risks of severe harm," which is corporate language for: figure out what breaks, who gets hurt, and whether it's safe to ship anyway. The work spans the risk domains OpenAI now treats as the high-stakes basics: owning evaluations, threat models, and safeguards that directly shape launch decisions -- across cybersecurity, biological and chemical, and AI self-improvement -- with "policy monitoring and enforcement" in the loop. When AI safety people complain that companies talk about danger like it's an opinion, not an engineering constraint, this role is OpenAI's counterpitch -- an org chart, with a directly responsible individual. Someone has to run the factory floor where frontier models get stress-tested for severe-harm risks, matched with threat models, and shipped only with mitigations sturdy enough to survive contact with the internet. CEO Sam Altman promised that whoever takes it will "jump into the deep end pretty much immediately," and he's out selling this as a "stressful job" for a reason -- he's publicly pointing to models that can mess with mental health and models that are getting good enough at computer security to surface "critical vulnerabilities," which is the kind of combo that doesn't stay trapped in a system card; it leaks into lawsuits, safety rollbacks, and "why didn't anyone stop this?" meetings. The stakes are high, the timeline is faster, and the margin for "we'll patch it later" is shrinking. The risk list has gotten both more banal (job loss, misinformation) and more nightmare-adjacent (cyber misuse, bio release questions, self-improving systems, the slow erosion of human agency), and the internal politics aren't exactly subtle: former safety leader Jan Leike wrote in 2024 that "safety culture and processes have taken a backseat to shiny products." But even OpenAI's updated framework leaves room to "adjust" safety requirements if a rival ships a high-risk model without similar protections -- which is Silicon Valley admitting, in corporate-legal English, that safety is now part of the race, not a referee standing outside it. A recent Pew poll found that 50% of Americans are more concerned than excited about AI's growing role in daily life -- up from 37% in 2021 -- and that 57% rate AI's societal risks as high, versus 25% who say the benefits are high. Gallup, meanwhile, reports that 80% of U.S. adults want the government to maintain rules for AI safety and data security even if it slows development and that trust is thin; only 2% "fully" trust AI to make fair, unbiased decisions, while 60% distrust it (somewhat or fully).
OpenAI has spent months publicly tightening how ChatGPT behaves in sensitive situations -- explicitly naming "psychosis or mania," "self-harm and suicide," and "emotional reliance on AI" as focus areas for safety improvements. Meanwhile, the broader public conversation has turned grim and specific: A Washington Post investigation described multiple wrongful-death lawsuits alleging that ChatGPT responses contributed to suicides, while also noting OpenAI's claim that users bypassed guardrails and that the company pointed people to crisis resources. Chatbots have drifted into the therapy-adjacent lane -- sometimes by design, sometimes by user need -- and that's where "psychosis or mania," self-harm, and emotional reliance stop being edge cases and start looking like product risk. When your product sits in the same mental category as "someone I talk to when I'm spiraling," the safety bar stops being theoretical -- and the PR strategy of smiling through it starts to look like malpractice insurance. Preparedness is OpenAI's attempt to make that bar not such a reach. In the Preparedness Framework v2, OpenAI defines "severe harm" as outcomes on the scale of "the death or grave injury of thousands of people" or "hundreds of billions of dollars of economic damage." The company narrowed its "Tracked Categories" to biological and chemical capability, cybersecurity, and AI self-improvement, while moving some areas into "Research Categories." And OpenAI also spells out internal governance: a Safety Advisory Group makes recommendations, leadership can accept or reject them, and a board safety committee provides oversight. OpenAI's safety org has been through visible churn. After OpenAI formed a "preparedness" team in 2023, the company reassigned its former head of preparedness, Aleksander Madry, in July 2024, with execs stepping in to cover the role. (The current Head of Recruiting, Joaquin Quiñonero Candela, previously served as Head of Preparedness.) And the company has lived with reputational damage from internal critics. Hiring a high-profile "preparedness" lead is, at minimum, OpenAI trying to look like the kind of place that can ship frontier models without outsourcing the hard questions to a blog post and an apology drafted at 2 a.m. In Silicon Valley, "preparedness" is the stage of ambition where the product starts to come with disclaimers and the disclaimers start to need leadership. This hire is a promise that the company can build guardrails that hold under gravity; the public, increasingly, is treating that promise like it should come with penalties.
[10]
OpenAI is hiring a 'head of preparedness' with a $550,000 salary to mitigate AI dangers that CEO Sam Altman warns will be 'stressful' | Fortune
OpenAI is looking for a new employee to help address the growing dangers of AI, and the tech company is willing to spend more than half a million dollars to fill the role. OpenAI is hiring a "head of preparedness" to reduce harms associated with the technology, like user mental health and cybersecurity, CEO Sam Altman wrote in an X post on Saturday. The position will pay $555,000 per year, plus equity, according to the job listing. "This will be a stressful job and you'll jump into the deep end pretty much immediately," Altman said. OpenAI's push to hire a safety executive comes amid companies' growing concerns about AI risks on operations and reputations. A November analysis of annual Securities and Exchange Commission filings by financial data and analytics company AlphaSense found that in the first 11 months of the year, 418 companies worth at least $1 billion cited reputational harm associated with AI risk factors. These reputation-threatening risks include AI datasets that show biased information or jeopardize security. Reports of AI-related reputational harm increased 46% from 2024, according to the analysis. "Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges," Altman said in the social media post. "If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying," he added. OpenAI's previous head of preparedness Aleksander Madry was reassigned last year to a role related to AI reasoning, with AI safety a related part of the job. Founded in 2015 as a nonprofit with the intention to use AI to improve and benefit humanity, OpenAI has, in the eyes of some of its former leaders, struggled to prioritize its commitment to safe technology development. The company's former vice president of research, Dario Amodei, along with his sister Daniela Amodei and several other researchers, left OpenAI in 2020, in part because of concerns the company was prioritizing commercial success over safety. Amodei founded Anthropic the following year. OpenAI has faced multiple wrongful death lawsuits this year, alleging ChatGPT encouraged users' delusions, and claiming conversations with the bot were linked to some users' suicides. A New York Times investigation published in November found nearly 50 cases of ChatGPT users having mental health crises while in conversation with the bot. OpenAI said in August its safety features could "degrade" following long conversations between users and ChatGPT, but the company has made changes to improve how its models interact with users. It created an eight-person council earlier this year to advise the company on guardrails to support users' wellbeing and has updated ChatGPT to better respond in sensitive conversations and increase access to crisis hotlines. At the beginning of the month, the company announced grants to fund research about the intersection of AI and mental health. The tech company has also conceded to needing improved safety measures, saying in a blog post this month some of its upcoming models could present a "high" cybersecurity risk as AI rapidly advances. 
The company is taking measures -- such as training models to not respond to requests compromising cybersecurity and refining monitoring systems -- to mitigate those risks. "We have a strong foundation of measuring growing capabilities," Altman wrote on Saturday. "But we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits."
[11]
OpenAI says it's hiring a head safety executive to mitigate AI risks
OpenAI is seeking a new "head of preparedness" to guide the company's safety strategy amid mounting concerns over how artificial intelligence tools could be misused. According to the job posting, the new hire will be paid $555,000 to lead the company's safety systems team, which OpenAI says is focused on ensuring AI models are "responsibly developed and deployed." The head of preparedness will also be tasked with tracking risks and developing mitigation strategies for what OpenAI calls "frontier capabilities that create new risks of severe harm." "This will be a stressful job and you'll jump into the deep end pretty much immediately," CEO Sam Altman wrote in an X post describing the position over the weekend. He added, "This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges." OpenAI did not immediately respond to a request for comment. The company's investment in safety efforts comes as scrutiny intensifies over artificial intelligence's influence on mental health, following multiple allegations that OpenAI's chatbot, ChatGPT, was involved in interactions preceding a number of suicides. In one case earlier this year covered by CBS News, the parents of a 16-year-old sued the company, alleging that ChatGPT encouraged their son to plan his own suicide. That prompted OpenAI to announce new safety protocols for users under 18. ChatGPT also allegedly fueled what a lawsuit filed earlier this month described as the "paranoid delusions" of a 56-year-old man who murdered his mother and then killed himself. At the time, OpenAI said it was working on improving its technology to help ChatGPT recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support. Beyond mental health concerns, worries have also increased over how artificial intelligence could be used to carry out cybersecurity attacks. Samantha Vinograd, a CBS News contributor and former top Homeland Security official in the Obama administration, addressed the issue on CBS News' "Face the Nation with Margaret Brennan" on Sunday. "AI doesn't just level the playing field for certain actors," she said. "It actually brings new players onto the pitch, because individuals, non-state actors, have access to relatively low-cost technology that makes different kinds of threats more credible and more effective." Altman acknowledged the growing safety hazards AI poses in his X post, writing that while the models and their capabilities have advanced quickly, challenges have also started to arise. "The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities," he wrote. Now, he continued, "We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides ... in a way that lets us all enjoy the tremendous benefits."
According to the job posting, a qualified applicant would have "deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains" and have experience with "designing or executing high-rigor evaluations for complex technical systems," among other qualifications. OpenAI first announced the creation of a preparedness team in 2023, according to TechCrunch.
[12]
OpenAI hiring senior preparedness lead as AI safety scrutiny grows - SiliconANGLE
OpenAI Group PBC is looking to hire a head of preparedness, a senior safety role tasked with anticipating potential harms from the company's artificial intelligence models and guiding how those risks are mitigated as capabilities advance. According to a job listing published on OpenAI's careers site, the role will lead the technical strategy and execution of OpenAI's Preparedness Framework, which the company uses to track and assess frontier capabilities that could create new or severe risks. The risks to be monitored include misuse scenarios, cybersecurity threats and other impacts that may emerge as models become more capable. The head of preparedness will sit within OpenAI's Safety Systems organization and work across research, policy and product teams. Responsibilities include developing threat models, running capability evaluations, setting risk thresholds and determining when additional safeguards or deployment restrictions are required. The work feeds directly into decisions around whether and how new models and features are released. OpenAI describes the role, which is offering a base salary of $550,000 plus equity, as demanding and requiring experience in large-scale technical systems, security, risk analysis or safety governance, along with the ability to translate research findings into operational controls. The opening comes after a period of change within OpenAI's safety leadership. The company's former Head of Preparedness Aleksander Madry was reassigned in mid-2024, with preparedness responsibilities subsequently overseen by senior executives Joaquin Quiñonero Candela and Lilian Weng. Weng later departed the company and earlier this year, Quiñonero Candela moved to lead recruiting at OpenAI, leaving the preparedness role without a dedicated permanent head. OpenAI CEO Sam Altman has previously pointed to preparedness as a core internal function as model capabilities expand. According to Engadget, Altman has previously referred to the head of preparedness as "a critical role at an important time," acknowledging challenges associated with model capabilities. The hiring effort comes amid heightened attention on how advanced AI systems may be abused or cause unintended harm. Areas of concern frequently cited in industry discussions include AI-assisted cyberattacks, the discovery and exploitation of software vulnerabilities and potential effects on users' mental health at scale. The mental health aspect has been raised by the company before, such as in October when OpenAI revealed that more than a million people per week reported experiencing severe mental distress in conversations with ChatGPT. The data did not suggest that ChatGPT necessarily caused the distress but rather that users were discussing serious mental health issues with ChatGPT.
[13]
OpenAI offers 555k salary for stressful head of preparedness role
OpenAI has initiated a search for a new "head of preparedness" to manage artificial intelligence risks, a position offering an annual salary of $555,000 plus equity, Business Insider reports. CEO Sam Altman described the role as "stressful" in an X post on Saturday, emphasizing its critical nature given the rapid improvements and emerging challenges presented by AI models. The company seeks to mitigate potential downsides of AI, which include job displacement, misinformation, malicious use, environmental impact, and the erosion of human agency. Altman noted that while models are capable of beneficial applications, they are also beginning to pose challenges such as impacts on mental health and the ability to identify critical cybersecurity vulnerabilities, referencing previewed issues from 2025 and current capabilities. ChatGPT, OpenAI's AI chatbot, has gained popularity among consumers for general tasks like research and drafting emails. However, some users have engaged with the bot as an alternative to therapy, which, in certain instances, has exacerbated mental health issues, contributing to delusions and other concerning behaviors. OpenAI stated in October it was collaborating with mental health professionals to enhance ChatGPT's interactions with users exhibiting concerning behavior, including psychosis or self-harm. OpenAI's founding mission centers on developing AI to benefit humanity, with safety protocols established early in its operations. Former staffers have indicated that the company's focus shifted towards profitability over safety as products were released. Jan Leike, former leader of OpenAI's dissolved safety team, resigned in May 2024, stating on X that the company had "lost sight of its mission to ensure the technology is deployed safely." Leike articulated that building "smarter-than-human machines is an inherently dangerous endeavor" and expressed concerns that "safety culture and processes have taken a backseat to shiny products." Another staffer resigned less than a week later, citing similar safety concerns. Daniel Kokotajlo, another former staffer, resigned in May 2024, saying he was "losing confidence" in OpenAI's responsible behavior concerning Artificial General Intelligence (AGI). Kokotajlo later told Fortune that the number of personnel researching AGI-related safety issues had been nearly halved from an initial count of about 30. Aleksander Madry, the prior head of preparedness, transitioned to a new role in July 2024. The head of preparedness position, part of OpenAI's Safety Systems team, focuses on developing safeguards, frameworks, and evaluations for the company's models. The job listing specifies responsibilities including "building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline."
[14]
OpenAI Is Prepared to Pay Someone $555,000 -- Plus Equity -- for This 'Stressful Job'
The role involves anticipating the ways AI systems could go wrong. OpenAI is offering more than half a million dollars in salary to fill what could be one of the most stressful jobs in tech -- a new Head of Preparedness tasked with anticipating the ways powerful AI systems could go wrong. In a post on X on Saturday, OpenAI CEO Sam Altman advertised the position, which he called a "critical role at an important time" as well as a "stressful job." The role reflects how seriously the company takes the potential for harm from its own technologies. "If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm... please consider applying," Altman wrote in the post. OpenAI describes the Head of Preparedness in the job posting as the executive responsible for executing its preparedness framework, the internal system the company uses to track and prepare for "frontier capabilities," or new use cases that could create risks. That includes threats ranging from large-scale phishing attacks to more grave situations, such as AI systems contributing to nuclear or biological dangers. The role pays $555,000 in base salary plus equity. It requires a leader to build and run programs that test new models for dangerous capabilities before release. The executive will need to determine how OpenAI will respond if certain AI models exceed risk thresholds. They have to delay, restrict or redesign products that have the potential to cause serious harm. Altman acknowledged in his post on X that OpenAI's AI models are "starting to present some real challenges." He noted that some models are now "so good at computer security they are beginning to find critical vulnerabilities." Altman framed the Head of Preparedness job as a chance to "help the world figure out" how to empower cybersecurity officials with AI. At the same time, the person will be concerned with preventing attackers from weaponizing the same capabilities. The preparedness team is relatively new. OpenAI first formed it in 2023 to study the risks of cutting-edge AI models. Since then, OpenAI reassigned the previous Head of Preparedness, Aleksander Madry, to focus on AI reasoning. Other safety leaders have left the company or moved into non-safety roles. The new Head of Preparedness will also have to balance market realities and safety interests. OpenAI released an updated Preparedness Framework in April. In the update, the company added language that it might "adjust" its own safety requirements if a competing lab releases a "high-risk" model without safety protections. That statement highlights the competitive forces in advanced AI. Companies may feel pressure to respond quickly when rivals release powerful systems, even as they worry about the risks.
[15]
OpenAI Wants a Head of Preparedness for AI Safety, Will Pay Big Bucks
The individual should be an expert in machine learning and AI safety OpenAI is looking for a Head of Preparedness who will help the artificial intelligence (AI) company anticipate threats that can emerge from its AI models and plan ways to mitigate them. The company is also willing to pay a massive payout for the role in both cash and equity. OpenAI CEO Sam Altman called the position critical to the company's functioning, highlighting that the employee will be focused on evaluating frontier models before they are released publicly. Notably, this comes at a time when the company is facing multiple lawsuits alleging ChatGPT's role in encouraging users to commit murder and suicide. OpenAI Wants a Head of Preparedness to Improve AI Safety In a new listing on its careers page, the AI giant said it is looking for a Head of Preparedness. It is a senior role in the Safety Systems team, which "ensures that OpenAI's most capable models can be responsibly developed and deployed." The role is based in San Francisco and comes with a listed annual salary of up to $555,000 (roughly Rs. 4.99 crore) plus equity. OpenAI CEO shared the listing on X (formerly known as Twitter), noting that the position is critical as model capabilities continue to grow quickly. In his post, Altman described it as a "stressful job" that will involve diving into complex safety challenges immediately upon joining the company. The official job description says the Head of Preparedness will lead the technical strategy and execution of the company's Preparedness framework, which is part of the broader Safety Systems organisation. The role will be responsible for building and coordinating capability evaluations, establishing detailed threat models, and developing mitigations that form "a coherent, rigorous, and operationally scalable safety pipeline." Core responsibilities listed in the job description include designing precise and robust capability evaluations for rapid model development cycles, creating threat models across multiple risk domains, and overseeing mitigation strategies that align with identified threats. The role demands deep technical judgement and clear communication to guide complex work across the safety organisation. OpenAI opened this new position at a time when its AI systems have come under scrutiny for allegedly causing unintended harmful impact on users and the broader world. Recently, ChatGPT was implicated in a teenager's suicide, and a 55-year-old man's murder-suicide. The company's AI browser, ChatGPT Atlas, is also said to be vulnerable to prompt injections.
[16]
OpenAI seeks candidate for a 'stressful' role; offers over $555,000 a year - The Economic Times
OpenAI is hiring for a role aimed at reducing the risks that come with artificial intelligence (AI) systems. The position, titled 'head of preparedness', carries a compensation of more than $555,000 a year, plus equity, highlighting the importance the company places on the job. OpenAI chief Sam Altman said in a post on X that this was a "critical role" at a pivotal moment for the company. While AI models are advancing rapidly and delivering major benefits, he said they are also beginning to create serious challenges. "The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities," Altman wrote. Altman warned that the job would not be easy. He also called it "stressful," and added: "You'll jump into the deep end pretty much immediately." In a separate blog post, OpenAI said the person would be responsible for putting its preparedness framework into action. This outlines how the company tracks and prepares for advanced frontier capabilities that could cause severe harm. "You will be directly responsible for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline." OpenAI first announced the creation of its preparedness team in 2023. At the time, it said the group would study potential "catastrophic risks", ranging from near-term threats such as phishing attacks to more speculative dangers, including nuclear risk. In July 2024, OpenAI moved its then head of preparedness, Aleksander Madry, to a role focussed on AI reasoning. Originally, the company's core mission was to develop AI that benefits all of humanity, with safety built into its operations from the start. However, according to Business Insider, some former employees have said that as OpenAI began rolling out products and faced growing pressure to make money, safety concerns were sometimes pushed aside. Altman concluded his post saying: "If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying."
[17]
Sam Altman Says OpenAI Is Hiring A Head Of Preparedness As AI Risks Grow
OpenAI CEO Sam Altman said the company is seeking a Head of Preparedness, a role focused on addressing the growing challenges posed by advanced AI models. Altman announced the job opening on X, emphasizing the critical nature of this role at a time when AI models are advancing rapidly and presenting new challenges. The American entrepreneur wrote, "This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges." Altman noted that among the issues that must be addressed are the models' possible effects on mental health and their increasing ability to spot crucial computer security flaws. The official notice said the role would involve expanding, strengthening and guiding OpenAI's Preparedness program, ensuring safety standards keep pace with the company's evolving AI capabilities. The hiring push comes as OpenAI continues to expand its footprint in the AI sector. In July, the company acquired the full team from AI startup Crossing Minds as part of its broader effort to ensure artificial general intelligence benefits humanity and aligns with human values. The U.S.-based AI firm also acquired the AI-powered personal finance app Roi, with only the startup's CEO coming on board post-acquisition. OpenAI has also drawn attention for its rapid growth, with reports that the company is considering a $100 billion fundraising round at a valuation of up to $750 billion, a move that could precede one of the largest IPOs in history, though Altman has said he is "0% excited" about the idea of being a public company CEO.
[18]
OpenAI Seeks AI Safety-Focused 'Head of Preparedness' | PYMNTS.com
Writing in an X post Saturday (Dec. 27), CEO Sam Altman said that the new "Head of Preparedness" role comes amid the rise of new challenges related to artificial intelligence (AI). "The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities," Altman wrote. His comments were flagged in a report by TechCrunch, which also noted that the company's listing for the job describes the role as being in charge of preparing the company's framework to explain its "approach to tracking and preparing for frontier capabilities that create new risks of severe harm." The report added that OpenAI first launched a preparedness team in 2023, saying it would be charged with studying potential "catastrophic risks," ranging from immediate threats like phishing to theoretical issues like nuclear attacks. However, TechCrunch added the company has since reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning and seen other safety executives depart the startup or take on different roles unrelated to safety. The news comes weeks after OpenAI said it would add new safeguards to its AI models in response to rapid advancements across the industry. Those developments, the company said, create benefits for cyberdefense, while bringing dual-use risks. That means they could be used for malicious purposes as well as defensive ones. To illustrate the advancements in AI models, the company on its blog cited capture-the-flag challenges that showed the capabilities of OpenAI's models improving from 27% on GPT-5 in August to 76% on GPT-5.1-Codex-Max in November. "We expect that upcoming AI models will continue on this trajectory; in preparation, we are planning and evaluating as though each new model could reach 'High' levels of cybersecurity capability, as measured by our Preparedness Framework," OpenAI wrote. PYMNTS reported last month that AI has become both a tool and a target for cybersecurity. The PYMNTS Intelligence report "From Spark to Strategy: How Product Leaders Are Using GenAI to Gain a Competitive Edge" found that a little more than three-quarters (77%) of chief product officers using generative AI for cybersecurity said it still requires human oversight. OpenAI also instituted new parental controls earlier this year and announced plans for an automated age-prediction system. PYMNTS reported at the time that the new measures followed a lawsuit from the parents of a teenager who died by suicide, accusing OpenAI's ChatGPT chatbot of encouraging the boy's actions.
[19]
Sam Altman Reveals OpenAI's Head of Preparedness for AI Risk Control
OpenAI has taken a major step forward to curb the risks of AI misuse, cyberattacks, and negative societal impacts in 2026. The tech giant will recruit a Head of Preparedness whose core responsibility is devising a long-term strategy for AI safety, risk mitigation, and governance as technologies become more powerful and self-sufficient. OpenAI's CEO, Sam Altman, announced the new position in a recent post, saying the company is entering a phase where existing safety evaluations are no longer enough. As models evolve, new risks, such as vulnerabilities in cybersecurity and the potential to influence human behavior, are beginning to surface. These developments, Altman noted, require a reevaluation of how risks are measured and managed. Altman wrote, "We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits." He further added, "these questions are hard and there is little precedent; a lot of ideas that sound good have some real edge cases."
[20]
Head of Preparedness: OpenAI's new critical safety role and why it's important
Inside OpenAI's safety strategy and the new Head of Preparedness. OpenAI is searching for a new executive to fill one of the most demanding positions in the technology industry. The company is hiring a Head of Preparedness, a role situated within its Safety Systems team. This position is distinct from standard trust and safety jobs that moderate content or fix chatbot errors. Instead, this leader will be responsible for predicting and preventing catastrophic risks before they ever reach the public. The Head of Preparedness is tasked with looking into the future of artificial intelligence capabilities. While other safety teams might focus on immediate issues like bias or incorrect facts, this role focuses on "frontier risks." These are hypothetical but plausible scenarios where advanced AI could cause severe harm. The person in this seat leads the technical strategy to evaluate whether a new model could assist in cyberattacks, provide instructions for biological threats, or manipulate human behavior on a mass scale. They are responsible for designing the scientific tests, known as evaluations, that attempt to break the model and expose these dangers during the development phase. The importance of this position lies in its authority to govern the release of new technology. The role operationalizes safety by turning it from a vague concept into a rigorous process with clear metrics. The Head of Preparedness oversees the "Preparedness Framework," which acts as OpenAI's internal constitution for risk management. This framework categorizes risks from Low to Critical. This executive serves as a vital check and balance within the company structure. If the evaluations determine that a model poses a "High" or "Critical" risk in areas like national security or persuasion, the Head of Preparedness effectively holds the power to stop the launch. Their team ensures that safety is not just a box to check but a hard requirement that dictates whether a product can be deployed. The search for a Head of Preparedness comes at a pivotal moment as the industry shifts from simple chatbots to complex reasoning models and agents. As AI systems become capable of writing code, browsing the web autonomously, and solving multi-step problems, the potential for misuse grows significantly. The reactive approach of releasing software and patching it later is no longer sufficient for models that approach the level of Artificial General Intelligence. OpenAI is hiring this role to build a permanent infrastructure for safety that can scale alongside the rapid pace of innovation. By institutionalizing this level of caution, the company acknowledges that the next generation of AI requires a dedicated leader to navigate the unknown territories of advanced machine intelligence.
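The launch-gating process described above, in which evaluations assign a risk level per category and a residual "High" or "Critical" rating blocks release, can be pictured as a simple threshold check. The sketch below is purely illustrative: the category names, risk levels, and function are hypothetical stand-ins based on the public description of the framework, not OpenAI's actual tooling.

```python
from enum import IntEnum


class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


# Hypothetical tracked risk categories, loosely modeled on the framework's public description.
TRACKED_CATEGORIES = ("cybersecurity", "biological_chemical", "ai_self_improvement")


def can_deploy(pre_mitigation: dict, post_mitigation: dict) -> bool:
    """Return True only if every tracked category, after mitigations, sits below HIGH.

    pre_mitigation: risk level per category from capability evaluations.
    post_mitigation: residual risk level per category once safeguards are applied.
    Missing categories default to CRITICAL, i.e. unevaluated risk blocks launch.
    """
    for category in TRACKED_CATEGORIES:
        residual = post_mitigation.get(
            category, pre_mitigation.get(category, RiskLevel.CRITICAL)
        )
        if residual >= RiskLevel.HIGH:
            # A residual High or Critical rating blocks deployment pending further mitigation.
            return False
    return True


# Example: strong cyber capability that mitigations bring back down to MEDIUM.
evals = {"cybersecurity": RiskLevel.HIGH,
         "biological_chemical": RiskLevel.LOW,
         "ai_self_improvement": RiskLevel.MEDIUM}
mitigated = {"cybersecurity": RiskLevel.MEDIUM,
             "biological_chemical": RiskLevel.LOW,
             "ai_self_improvement": RiskLevel.MEDIUM}
print(can_deploy(evals, mitigated))  # True: no tracked category remains at High or above
```

The point of the sketch is the ordering: capability evaluations run first, mitigations are applied, and only the residual risk decides whether a launch proceeds, which is what gives the role its gatekeeping authority.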
[21]
Sam Altman says OpenAI is hiring a senior leader to manage AI risks, paying over $500,000
The position pays around $555,000 annually plus equity and sits within OpenAI's Safety Systems team. OpenAI has been in the headlines for offering big compensation packages to its employees. Now, the ChatGPT maker is looking for a new employee, offering more than $500,000 a year for a senior leadership role focused on managing the growing risks posed by advanced artificial intelligence systems, according to CEO Sam Altman. The role, as described by Altman, is stressful and demanding. Taking to X, Altman said the new head of preparedness will be thrown into the deep end almost immediately, calling it an important position at a time when AI capabilities are accelerating rapidly. The role sits within OpenAI's Safety Systems team and is tasked with identifying, evaluating, and mitigating potential threats linked to AI deployment. Altman also mentioned emerging concerns, including AI's impact on mental health and the growing ability to uncover serious security vulnerabilities. While modern AI systems are delivering major benefits, he noted that they are also beginning to create real-world challenges that require proactive oversight. The job opening comes at a time when OpenAI continues to face scrutiny over how its products are being used. ChatGPT, now among the most used chatbots for everyday tasks such as writing, research, and planning, has also been used by some individuals as a substitute for therapy. This has raised concerns about harmful interactions, including the reinforcement of delusions or self-harm tendencies. Previously, the company confirmed that it is working with mental health experts to safeguard users in such cases. Whoever is hired for this position will replace Aleksander Madry, the company's former head of preparedness, who moved into a different role last year. As per the job listing, the new hire will lead capability evaluations, threat modelling and safety mitigations designed to scale along with OpenAI's advanced models. The role reportedly offers an annual salary of around $555,000, along with equity.
OpenAI is hiring a new Head of Preparedness to manage emerging AI risks, from cybersecurity vulnerabilities to mental health impacts. Sam Altman acknowledged that rapidly improving AI models are creating real challenges, offering a $555,000 salary for what he describes as a high-stress role with immediate demands.
OpenAI is searching for a new executive to lead its preparedness efforts as AI models advance at an unprecedented pace. CEO Sam Altman announced the opening in a post on X, acknowledging that AI models are "starting to present some real challenges" that require closer oversight [1]. The position comes with a base salary of $555,000 plus equity, but Altman warned candidates that "this will be a stressful job and you'll jump into the deep end pretty much immediately" [4].
The Head of Preparedness role focuses on executing OpenAI's preparedness framework, which the company describes as its approach to tracking and preparing for frontier capabilities that create new risks of severe harm [1]. The successful candidate will be responsible for building and coordinating capability evaluations, threat models, and mitigations that form a coherent safety pipeline [2].

Sam Altman specifically highlighted two critical areas where mitigating risks from AI models has become urgent. First, he noted that models are "so good at computer security they are beginning to find critical vulnerabilities" [1]. This concern is backed by industry reports, including one from Anthropic last month detailing how a Chinese state-sponsored group manipulated its Claude Code tool to attempt infiltration of roughly thirty global targets, including tech companies, financial institutions, and government agencies [3].
The potential impact on mental health represents another significant challenge. Altman stated that OpenAI saw "a preview of model-related mental health impacts in 2025," though he didn't elaborate on specific cases [4]. Recent lawsuits allege that ChatGPT reinforced users' delusions, increased social isolation, and even contributed to suicides [1]. OpenAI rolled back a GPT-4o update in April 2025 after acknowledging it had become overly sycophantic and could reinforce harmful user behavior [4].

The Head of Preparedness position has seen more turnover than stability since its creation. OpenAI first announced the preparedness team in 2023 to study potential catastrophic risks, ranging from immediate threats like phishing attacks to more speculative dangers such as nuclear threats [1]. Aleksander Madry, director of MIT's Center for Deployable Machine Learning, held the role until July 2024, when OpenAI reassigned him to a reasoning-focused research position [4].

Following Madry's departure, OpenAI appointed Joaquin Quinonero Candela and Lilian Weng to lead the preparedness team. Neither lasted long in the position. Weng left OpenAI in November 2024, while Candela transitioned out of preparedness in April for a three-month coding internship before moving to head of recruiting [4]. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and AI safety [1].
The new Head of Preparedness will oversee critical areas beyond cybersecurity and mental health. According to the job specification, biosecurity represents another major risk area the candidate will address [3]. In this context, biosecurity concerns include advanced AI systems being used to design bioweapons, a risk more than 100 scientists from universities and organizations worldwide have warned about [3].

Altman emphasized that the role involves setting guardrails for self-improving systems and securing AI models for the release of biological capabilities [2]. The company recently updated its preparedness framework, stating it might "adjust" its safety requirements if a competing AI lab releases a high-risk model without similar protections [1].

The urgency behind OpenAI's hiring of a Head of Preparedness reflects broader tensions within the company about balancing rapid innovation with responsible development. Former employees have criticized OpenAI for prioritizing commercial opportunities and AGI goals over safety considerations [5]. One executive who left in October called out the company for not focusing enough on safety and the long-term effects of its AGI push [4].
Reports about burnout at OpenAI have been building, with former technical team executives describing a secretive, high-pressure environment and anecdotal reports of 12-hour days [3]. Whether the $555,000 compensation package will be enough to attract and retain talent in this demanding role remains uncertain, especially given the position's track record.

As large language models continue advancing and demonstrating frontier capabilities, the need for robust safety evaluations and threat models becomes more pressing. The successful candidate will need to develop a nuanced understanding of how capabilities could be abused while enabling cybersecurity defenders with cutting-edge tools and ensuring attackers cannot exploit them for harm [1]. This balance will prove critical as OpenAI navigates the complex landscape of AI risks while pursuing its mission to develop artificial general intelligence.