7 Sources
[1]
OpenAI robotics lead Caitlin Kalinowski quits in response to Pentagon deal | TechCrunch
Hardware executive Caitlin Kalinowski announced today that in response to OpenAI's controversial agreement with the Department of Defense, she's resigned from her role leading the company's robotics team. "This wasn't an easy call," Kalinowski said in a social media post. "AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." Kalinowski, who previously led the team building augmented reality glasses at Meta, joined OpenAI in November 2024. In her announcement today, she emphasized that the decision was "about principle, not people" and said she has "deep respect" for CEO Sam Altman and the OpenAI team. In a follow-up post on X, Kalinowski added, "To be clear, my issue is that the announcement was rushed without the guardrails defined. It's a governance concern first and foremost. These are too important for deals or announcements to be rushed." An OpenAI spokesperson confirmed Kalinowski's departure to TechCrunch. "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the company said in a statement. "We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world." OpenAI's agreement with the Pentagon was announced just over a week ago, after discussions between the Pentagon and Anthropic fell through as the AI company tried to negotiate for safeguards preventing its technology from being used in mass domestic surveillance or fully autonomous weapons. The Pentagon subsequently designated Anthropic a supply-chain risk. 
(Anthropic said it will fight the designation in court; in the meantime, Microsoft, Google, and Amazon said they will continue to make Anthropic's Claude available to non-defense customers.) Then, OpenAI quickly announced an agreement of its own allowing its technology to be used in classified environments. As executives attempted to explain the deal on social media, the company described it as taking "a more expansive, multi-layered approach" that relies not just on contract language, but also technical safeguards, to protect red lines similar to Anthropic's. Nonetheless, the controversy appears to have damaged OpenAI's reputation among some consumers, with ChatGPT uninstalls surging 295% and Claude climbing to the top of the App Store charts. As of Saturday afternoon, Claude and ChatGPT remain the U.S. App Store's number one and number two free apps, respectively.
[2]
OpenAI Robotics head resigns after deal with Pentagon
March 7 (Reuters) - Caitlin Kalinowski, head of robotics and consumer hardware at OpenAI, announced her resignation on Saturday, citing concerns about the company's agreement with the Department of Defense. In a social media post on X, Kalinowski wrote that OpenAI did not take enough time before agreeing to deploy its AI models on the Pentagon's classified cloud networks. "AI has an important role in national security," Kalinowski posted. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." Reuters could not immediately reach Kalinowski for comment, but she wrote on X that while she has "deep respect" for OpenAI CEO Sam Altman and the team, the company announced the Pentagon deal "without the guardrails defined." "It's a governance concern first and foremost," Kalinowski wrote in a subsequent X post. "These are too important for deals or announcements to be rushed." OpenAI said the day after the deal was struck that it includes additional safeguards to protect its use cases. The company on Saturday reiterated that its "red lines" preclude use of its technology in domestic surveillance or autonomous weapons. "We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world," the company said in a statement to Reuters. Kalinowski joined OpenAI in 2024 after leading augmented reality hardware development at Meta Platforms (META.O). Reporting by Karen Brettell; Editing by Sergio Non and Franklin Paul
[3]
OpenAI's head of robotics resigns following deal with the Department of Defense
OpenAI is going to need to find a new head of robotics. Caitlin Kalinowski, OpenAI's now-former head of robotics, posted on X that she was resigning from her role, criticizing the company's haste in partnering with the Department of Defense without establishing proper guardrails. Kalinowski, who previously worked at Meta before leaving to join OpenAI in late 2024, wrote on X that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." Responding to another post, the former OpenAI exec explained that "the announcement was rushed without the guardrails defined," adding that it was a "governance concern first and foremost." OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also said in the statement that it does not support the uses Kalinowski warned against. "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read. Kalinowski's resignation may be the most high-profile fallout from OpenAI's decision to sign a deal with the Department of Defense. The decision came just after Anthropic refused to lift certain AI guardrails around mass surveillance and fully autonomous weapons. However, even OpenAI's CEO, Sam Altman, said that he would amend the deal with the Department of Defense to prohibit spying on Americans.
[4]
There Was Just an Unusually Unsettling Pentagon-Related Resignation at OpenAI
It wasn't an acrimonious departure, but this specific resignation coming right now might be cause for some soul-searching internally. It probably will not shock you to learn that someone just resigned from OpenAI and trumpeted their reasons far and wide on social media. You might not have even heard of Caitlin Kalinowski before this moment, and her online statement, which reads in part "I have deep respect for Sam and the team, and I'm proud of what we built together," does not, in and of itself, make for riveting reading material. Amid the company's high-profile deal with the Pentagon that CEO Sam Altman acknowledged "looked opportunistic and sloppy," it should come as no surprise that some employees are turned off. OpenAI's rival, Anthropic, seems to be taking a victory lap, raking in record revenue after having successfully marketed itself as the AI company that wants to protect the world from mass surveillance and autonomous killer robots. I mention all this because of Kalinowski's specific role: she was the leader of the company's robotics division. So when Kalinowski says in her statement that "lethal autonomy without human authorization" deserved more deliberation than it got, she at least speaks with some knowledge of what kinds of AI-powered robots OpenAI has in mind. According to a Wired report from last year, anonymous leakers revealed that OpenAI had been seeking to hire humanoid robot specialists, or at least specialists in robots with partially human-inspired designs.
It was also working toward creating AI algorithms that can "make sense of the physical world" and seeking to "empower robots to navigate and perform tasks." Robotics-related job ads on the OpenAI website tend to begin with the line "Our Robotics team is focused on unlocking general-purpose robotics and pushing towards AGI-level intelligence in dynamic, real-world settings." That's not to say any of OpenAI's goals in robotics were accomplished, but when the person in charge of the most Terminator-like technology at your company leaves, and strongly implies that it's because leadership was a little too cozy with a guy who calls himself the Secretary of War, that might reasonably be considered a moment for those in the C-suite to do some soul-searching.
[5]
OpenAI's robotics chief quits over the Pentagon deal
Caitlin Kalinowski spent 16 months building OpenAI's physical AI programme. On Saturday, she said the company moved too fast on something too important. The week that began with Anthropic being blacklisted by the Pentagon and ended with OpenAI taking its contract has now claimed OpenAI's most senior hardware executive. Caitlin Kalinowski, who joined OpenAI in November 2024 to lead its robotics and consumer hardware division, announced her resignation on Saturday on X. Her statement was short, direct, and more candid than anything OpenAI itself has said about the deal. "AI has an important role in national security," she wrote. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." In a subsequent post, she was more precise about the nature of the complaint. "It's a governance concern first and foremost," she wrote. "These are too important for deals or announcements to be rushed." Kalinowski was careful to frame her departure in personal terms. "This was about principle, not people," she wrote. "I have deep respect for Sam and the team." That last note carries some weight: Sam Altman has himself acknowledged that the Pentagon deal was "definitely rushed," and that the rollout produced significant backlash. What Kalinowski's resignation adds to that admission is a name and a title: the most senior person at OpenAI, whose job was to bring AI into physical systems, has decided that the process by which it will now enter weapons systems and surveillance infrastructure was not good enough. The sequence of events that led here unfolded over roughly a week. Anthropic, which had been the only AI company cleared to operate on the Pentagon's classified networks, following a $200 million contract awarded in July 2025, spent several weeks in tense negotiations with the Defense Department over the terms of continued use. 
Anthropic's position was that its models should not be deployed for mass domestic surveillance or fully autonomous weapons. The Pentagon, under Defense Secretary Pete Hegseth, insisted on language permitting use "for all lawful purposes," without specific carve-outs. On 28 February, with negotiations collapsed, President Trump directed all federal agencies to stop using Anthropic's technology and called the company "radical woke" on Truth Social. Hegseth formally designated Anthropic a supply-chain risk to national security, a classification previously reserved for foreign adversaries, and one that requires DoD vendors and contractors to certify they do not use Anthropic's models. Hours later, Altman posted on X that OpenAI had reached its own agreement to deploy its models on the Pentagon's classified network. OpenAI's stated position is that its deal includes the same core protections Anthropic sought: no mass domestic surveillance, no autonomous weapons. The company published a blog post outlining its approach and arguing that its cloud-only deployment architecture, retained safety stack, and contractual provisions, anchored to existing US law rather than bespoke prohibitions, make its agreement more robust than any previous classified AI deployment, including Anthropic's. Kalinowski's career before OpenAI was unusual in its breadth. She spent nearly six years at Apple as a technical lead on the Mac Pro and MacBook Air programmes, including the original unibody MacBook Pro, before moving to Meta's Oculus division, where she led virtual reality hardware for more than nine years. Her final role at Meta was heading Project Nazare, later named Orion, the augmented reality glasses initiative Meta unveiled as a prototype in September 2024 and described as the most advanced AR glasses ever made. She joined OpenAI the following month. 
During her 16 months at OpenAI, Kalinowski built out what the company describes as its physical AI programme, including a San Francisco lab employing roughly 100 data collectors training a robotic arm on household tasks. Her departure leaves that effort without its most experienced hardware leader at a moment when OpenAI has staked considerable ambition on moving beyond software. OpenAI confirmed her resignation on Saturday and said in a statement: "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons. We recognise that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society, and communities around the world." The fallout from OpenAI's Pentagon deal has not been limited to internal dissent. ChatGPT uninstalls reportedly surged 295% following the announcement, and Anthropic's Claude climbed to the number-one position in the US App Store, displacing ChatGPT. As of Saturday afternoon, the two apps remained first and second, respectively.
[6]
OpenAI robotics leader resigns over concerns about surveillance and autonomous weapons amid Pentagon contract | Fortune
Caitlin Kalinowski, who had been leading hardware and robotic engineering teams at OpenAI since November 2024, announced she has left the company. "I resigned from OpenAI," she posted on X and LinkedIn. "I care deeply about the Robotics team and the work we built together. This wasn't an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together." Her departure comes amid an escalating dispute over how far AI companies should go in supporting U.S. military uses of the technology. In recent days, negotiations between the Pentagon and Anthropic collapsed after the company pushed for strict limits on domestic surveillance and autonomous weapons. Soon after, OpenAI reached its own agreement with the Defense Department to deploy its models on a classified government network. The move drew criticism from some employees and observers who argued that OpenAI appeared to step in after Anthropic refused the terms. CEO Sam Altman later acknowledged the deal's rollout looked "opportunistic," and the company has since moved to clarify restrictions on how its systems can be used by the military. An OpenAI spokesperson confirmed the departure and provided a statement: "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons. We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world." 
Before OpenAI hired Kalinowski, she served as a hardware executive at Meta for nearly two and a half years leading the company's creation of Orion, previously codenamed Project Nazare, which it billed as "the most advanced pair of AR glasses ever made." Meta unveiled its prototype glasses in September. Before leading the Orion project, she worked for more than nine years on virtual reality headsets at Meta-owned Oculus, and before that, nearly six years at Apple helping to design MacBooks, including Pro and Air models.
[7]
OpenAI's Robotics Division Loses Key Leader Caitlin Kalinowski Over Disagreement on Military Deployment Terms
On Saturday, Caitlin Kalinowski said she resigned from OpenAI, arguing that potential uses of AI for warrantless monitoring of Americans and weapon systems operating without a human decision demanded more careful debate than they received. Her exit lands as OpenAI expands into classified Pentagon projects under an arrangement that kept two stated limits in place: no domestic mass surveillance and a requirement for human control over any use of force. In a post on X, Kalinowski said she still believes AI can matter for national security, but drew hard boundaries around domestic spying without court oversight and lethal autonomy without human authorization. She also framed the resignation as a values call rather than a personal dispute, while saying she respects Sam Altman and remains proud of what her robotics group built.
Caitlin Kalinowski's Bold Departure Sparks Debate
Kalinowski's concerns echo the same two fault lines now shaping how top AI labs negotiate with the U.S. national security apparatus: surveillance at home and autonomy in the use of force. In her post, she said those issues were not weighed with the level of deliberation she expected. At the same time, Altman has described OpenAI's posture changing from avoiding classified engagements to taking them on with the Department of War, calling the shift urgent and more complex than earlier work. He also said OpenAI had previously passed on classified opportunities that rival lab Anthropic accepted. OpenAI's Pentagon arrangement, as described alongside Altman's comments, kept two guardrails intact while adding operational measures such as putting OpenAI engineers on-site to watch model behavior and safety. Altman also said the company would build technical constraints meant to keep systems operating within expected limits, and that the Department of War wanted those protections as well.
AI Safety Negotiations Amid Pentagon Deal
This unfolding situation highlights the contrasting approaches of the two companies: Anthropic's refusal to adjust its terms led to a "supply chain risk" designation from Defense Secretary Pete Hegseth, while OpenAI negotiated terms it says align with existing U.S. law and policy. The juxtaposition raises questions about the ethical implications of AI in national security, particularly regarding surveillance and autonomous weaponry.
The Pentagon Deal: A Critical Turning Point
When Anthropic refused to change its position, Defense Secretary Pete Hegseth labeled the company a supply chain risk, and President Donald Trump directed agencies and military contractors to cut ties. Anthropic said on Friday it was "deeply saddened," called the designation "legally unsound," and warned it would "set a dangerous precedent for any American company that negotiates with the government." Altman also said OpenAI negotiated so comparable terms could be available to other AI developers, not only his firm. Even so, the split outcome remains stark: OpenAI says it got acceptance of the two guardrails, while Anthropic ended up blacklisted despite describing similar red lines. Altman said the Department of War viewed the principles as consistent with existing U.S. law and policy, and he cast OpenAI's quick move as an attempt to avoid what he viewed as a dangerous competitive trajectory among AI labs. Kalinowski's resignation, by contrast, spotlights how internal talent may react when the same boundaries are perceived as insufficiently examined.
This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
Caitlin Kalinowski, OpenAI's head of robotics and consumer hardware, resigned after just 16 months, citing concerns that the company's Pentagon deal was rushed without proper deliberation on critical issues like lethal autonomy and surveillance. The departure marks the most high-profile fallout from OpenAI's controversial agreement with the Department of Defense, which came days after rival Anthropic was designated a supply-chain risk for refusing similar terms.
Caitlin Kalinowski announced her resignation from OpenAI on Saturday, stepping down from her role leading the company's robotics and consumer hardware division in direct response to the agreement with the Department of Defense [1]. The hardware executive, who joined OpenAI in November 2024 after leading Meta's augmented reality glasses team, emphasized in a social media post that surveillance of Americans without judicial oversight and lethal autonomy without human authorization are "lines that deserved more deliberation than they got" [2].
Source: The Next Web
Kalinowski's departure represents the most significant internal fallout from OpenAI's controversial Pentagon deal, which was announced just over a week ago [3]. Her position as the leader of the company's most Terminator-like technology adds particular weight to her concerns about autonomous weapons and AI safety protocols.
In follow-up posts on X, Kalinowski clarified that her issue centered on process rather than personalities. "To be clear, my issue is that the announcement was rushed without the guardrails defined," she wrote, describing it as "a governance concern first and foremost" [1]. She stressed that "these are too important for deals or announcements to be rushed" while maintaining "deep respect" for CEO Sam Altman and the OpenAI team [5].
The resignation carries particular significance given Kalinowski's role building OpenAI's physical AI programme over 16 months, including a San Francisco lab employing roughly 100 data collectors training robotic arms on household tasks [5]. Her departure leaves the effort without its most experienced hardware leader at a critical moment when OpenAI has staked considerable ambition on moving beyond software into robotics [4].
An OpenAI spokesperson confirmed Kalinowski's departure to multiple outlets, stating that "we believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons" [1]. The company acknowledged that people have strong views on these issues and committed to continued engagement with employees, government, civil society and communities worldwide [2].
Source: Engadget
OpenAI has described its approach as taking "a more expansive, multi-layered approach" that relies not just on contract language but also technical safeguards and AI guardrails to protect red lines similar to those Anthropic sought [1]. The company's position is that its cloud-only deployment architecture, retained safety stack, and contractual provisions anchored to existing US law make its agreement more robust than previous classified AI deployments [5].
The OpenAI Pentagon deal emerged after negotiations between the Department of Defense and Anthropic collapsed when the AI company attempted to negotiate safeguards preventing its technology from being used in mass domestic surveillance or fully autonomous weapons [1]. The Pentagon subsequently designated Anthropic a supply-chain risk, a classification previously reserved for foreign adversaries [5]. Anthropic said it will fight the designation in court, while Microsoft, Google, and Amazon confirmed they will continue making Claude available to non-defense customers [1].
The controversy has damaged OpenAI's reputation among consumers, with ChatGPT uninstalls surging 295% and Claude climbing to the top of the App Store charts [1]. As of Saturday afternoon, Claude and ChatGPT remained the U.S. App Store's number one and number two free apps, respectively, suggesting sustained user concern over the Pentagon partnership [5]. The market response indicates that OpenAI faces both internal and external pressure to demonstrate that its approach to national security AI deployment maintains adequate safeguards against misuse.
Source: Gizmodo
Summarized by
Navi