OpenAI fills $555,000 safety role with Anthropic hire as AI advances rapidly

OpenAI has appointed Dylan Scandinaro from rival Anthropic as its new Head of Preparedness, filling a role advertised at up to $555,000 annually. CEO Sam Altman warns that "things are about to move quite fast" as the company works with extremely powerful models, requiring robust safeguards to mitigate severe risks as it pushes toward artificial general intelligence.

OpenAI Hires New Safety Exec to Lead Risk Mitigation Efforts

OpenAI has filled its highly publicized Head of Preparedness position by recruiting Dylan Scandinaro from rival AI developer Anthropic, CEO Sam Altman announced on Tuesday.[1][2] The job listing, which made headlines in December for offering a base salary of up to $555,000 annually, reflects OpenAI's intensifying focus on AI safety as it develops increasingly powerful models.[1] Scandinaro will be responsible for ensuring the safe development of AI systems and preparing for the risks they pose, a critical mandate as the company races toward artificial general intelligence (AGI).[3]

Rapid Progress in AI Demands Commensurate Safeguards

In his announcement on X, Sam Altman emphasized the urgency behind the appointment, stating that "things are about to move quite fast" as OpenAI works with "extremely powerful models."[3] The OpenAI CEO noted that such progress would require "commensurate safeguards" to ensure the company continues to deliver benefits responsibly.[2] Altman described Scandinaro as "by far the best candidate" for the position, adding that "Dylan will lead our efforts to prepare for and mitigate these severe risks" and that he would "sleep better tonight" knowing Scandinaro was in the role.[2] The statement underscores the weight OpenAI places on mitigating severe risks associated with frontier AI systems.

Dylan Scandinaro from Anthropic Brings Safety-First Expertise

Scandinaro arrives at OpenAI with substantial experience in AI safety and research. He was previously part of a safety team at Anthropic, a company that has built its reputation as a more safety-conscious AI developer.[1][3] His background also includes roles at Google DeepMind and Unity Technologies, giving him a comprehensive understanding of safety mechanisms across different AI systems.[3] In his own announcement on X, Scandinaro acknowledged the magnitude of the challenge ahead: "AI is advancing rapidly. The potential benefits are great—and so are the risks of extreme and even irrecoverable harm. There's a lot of work to do, and not much time to do it!"[3]

Preparedness Role Takes Center Stage Amid Legal and Safety Challenges

While OpenAI has previously formed internal safety and policy teams, the new preparedness role appears to consolidate responsibility for assessing and responding to potential high-impact risks from frontier AI systems.[2] The timing of the appointment is significant given OpenAI's recent legal troubles related to its safety mechanisms. In December, OpenAI and its largest financial backer, Microsoft, were sued in California state court over claims that ChatGPT encouraged a mentally ill man to kill his mother and himself.[2] The lawsuit throws the importance of robust safeguards and risk mitigation into sharp relief as the company faces growing scrutiny over how its AI systems affect vulnerable users. The Head of Preparedness position signals OpenAI's commitment to addressing these concerns head-on as it continues pushing the boundaries of what AI can achieve.
