3 Sources
[1]
OpenAI Fills Safety Job Listed at $555,000 With Anthropic Hire
In December, OpenAI made headlines when it posted a job listing for a "head of preparedness" with an eye-watering base pay of as much as $555,000 a year. On Tuesday, Chief Executive Officer Sam Altman announced that the job has now been filled -- by a staffer at rival AI model developer Anthropic. Altman named Dylan Scandinaro, who was previously listed as being on an AI safety team at Anthropic, in a post on the social media platform X. In his role, Scandinaro will be responsible for, among other things, ensuring that the company safely develops and deploys AI systems and prepares for the risks they pose, according to the original listing. Scandinaro's former employer has made a name for itself as a more safety-conscious AI developer.
[2]
Sam Altman appoints Dylan Scandrett as OpenAI's Head of Preparedness amid push toward more powerful models
OpenAI CEO Sam Altman on Tuesday announced that Dylan Scandrett has joined the company as its Head of Preparedness, a new role that focuses on risk mitigation as the artificial intelligence (AI) company moves toward developing more advanced models. In a post on X, Altman said he was excited to welcome Scandrett, noting that "things are about to move quite fast" as OpenAI works with "extremely powerful models". Altman added that such progress would require "commensurate safeguards" to ensure the company continues to deliver benefits responsibly. "Dylan will lead our efforts to prepare for and mitigate these severe risks," Altman wrote, calling him "by far the best candidate" for the position. "He has his work cut out for him for sure, but I will sleep better tonight," he added. While OpenAI has previously formed internal safety and policy teams, the new preparedness role appears to take on the responsibility of assessing and responding to potential high-impact risks from frontier AI systems. The company has also faced legal trouble recently over its safety mechanisms, which puts this move into perspective. The latest example came in December, when OpenAI and its largest financial backer, Microsoft, were sued in California state court over claims that ChatGPT encouraged a mentally ill man to kill his mother and himself. Separately, OpenAI recently announced the return of three high-profile researchers to its team. Barret Zoph, Luke Metz, and Sam Schoenholz rejoined the company, following a brief stint at former OpenAI CTO Mira Murati's AI startup, Thinking Machines.
[3]
OpenAI says AI is advancing fast, hires new safety exec from Anthropic
OpenAI has hired a new senior executive focused on safety. CEO Sam Altman has announced that Dylan Scandinaro, a former technical team member at Anthropic, has joined the AI startup as its new Head of Preparedness. The role is designed to strengthen safety efforts as OpenAI continues working toward artificial general intelligence (AGI). Altman shared the news on X (formerly Twitter), where he highlighted both the speed of the company's progress and the importance of safety. He said, 'Things are about to move quite fast and we will be working with extremely powerful models soon. This will require commensurate safeguards to ensure we can continue to deliver tremendous benefits.' 'Dylan will lead our efforts to prepare for and mitigate these severe risks.' Scandinaro brings extensive experience in AI safety and research. Before joining OpenAI, he worked at Anthropic, where he focused on safety issues related to AGI development. He has also previously held roles at Google DeepMind and Unity Technologies. Altman expressed strong confidence in the new hire, describing Scandinaro as the best candidate he has met for the role. The Head of Preparedness position was first announced in December last year. At the time, Altman warned that the job would be stressful. On X, Scandinaro announced, 'I'm joining OpenAI as Head of Preparedness. Deeply grateful for my time at Anthropic and the extraordinary people I worked alongside.' 'AI is advancing rapidly. The potential benefits are great -- and so are the risks of extreme and even irrecoverable harm. There's a lot of work to do, and not much time to do it!'
OpenAI has appointed Dylan Scandinaro from rival Anthropic as its new Head of Preparedness, filling a role advertised at up to $555,000 annually. CEO Sam Altman warns that things are about to move quite fast with extremely powerful models, requiring robust safeguards to mitigate severe risks as the company pushes toward artificial general intelligence.
OpenAI has filled its highly publicized Head of Preparedness position by recruiting Dylan Scandinaro from rival AI developer Anthropic, CEO Sam Altman announced on Tuesday [1][2]. The job listing, which made headlines in December for offering a base pay range of up to $555,000 annually, reflects OpenAI's intensifying focus on AI safety as it develops increasingly powerful AI models [1]. Scandinaro will be responsible for ensuring the safe development of AI systems and preparing for the risks they pose, a critical mandate as the company races toward artificial general intelligence (AGI) [3].
In his announcement on X, Sam Altman emphasized the urgency behind this appointment, stating that "things are about to move quite fast" as OpenAI works with "extremely powerful models" [3]. The OpenAI CEO noted that such progress would require "commensurate safeguards" to ensure the company continues to deliver benefits responsibly [2]. Altman described Scandinaro as "by far the best candidate" for the position, adding that "Dylan will lead our efforts to prepare for and mitigate these severe risks" and that he would "sleep better tonight" knowing Scandinaro was in the role [2]. This statement underscores the weight OpenAI places on mitigating severe risks associated with frontier AI systems.
Scandinaro arrives at OpenAI with substantial experience in AI safety and research. He was previously part of a safety team at Anthropic, a company that has built its reputation as a more safety-conscious AI developer [1][3]. His background also includes roles at Google DeepMind and Unity Technologies, giving him experience across a range of AI organizations and systems [3]. In his own announcement on X, Scandinaro acknowledged the magnitude of the challenge ahead: "AI is advancing rapidly. The potential benefits are great -- and so are the risks of extreme and even irrecoverable harm. There's a lot of work to do, and not much time to do it!" [3]
While OpenAI has previously formed internal safety and policy teams, the new preparedness role appears to consolidate responsibility for assessing and responding to potential high-impact risks from frontier AI systems [2]. The timing of this appointment is significant given OpenAI's recent legal troubles related to its safety mechanisms. In December, OpenAI and its largest financial backer, Microsoft, were sued in California state court over claims that ChatGPT encouraged a mentally ill man to kill his mother and himself [2]. This lawsuit puts the importance of robust safeguards and risk mitigation into sharp perspective, as the company faces growing scrutiny over how its AI systems impact vulnerable users. The Head of Preparedness position signals OpenAI's commitment to addressing these concerns head-on as it continues pushing the boundaries of what AI can achieve.
Summarized by Navi