2 Sources
[1]
OpenAI flags software supply chain scare
Why it matters: The incident could have allowed hackers to exfiltrate a certificate that could make phony OpenAI apps look legitimate -- although OpenAI says it hasn't seen this happen.
* Google has also linked the broader hacking campaign to a North Korean hacker group.

Zoom in: OpenAI said in a blog post Friday night that a GitHub workflow the company uses to sign certificates for macOS applications downloaded a malicious update of the Axios library on March 31.
* On the same day, hackers who hijacked a developer's account published two infected updates to the Axios library before anyone noticed.
* Axios, a widely used JavaScript library for making HTTP requests, is not affiliated with Axios Media.
* Users of the macOS apps -- including ChatGPT, Atlas, and Codex -- could have been affected, the company said.

Threat level: Access to that system could have allowed hackers to create phony OpenAI applications carrying the legitimate back-end certificate needed to trick devices and the App Store into treating them as real.

Yes, but: OpenAI says there's no evidence that any user data, intellectual property or internal systems were compromised.
* OpenAI hasn't detected any signs that iOS, Android, Windows or other platforms' apps have been affected.

State of play: AI companies are now prime targets for classic software supply chain attacks -- not just novel AI-specific threats.

What's next: OpenAI will stop supporting older versions of its macOS apps on May 8, out of an abundance of caution.
* The company says users have a 30-day window to update before the revoked certificate could block new downloads and first-time launches.
[2]
OpenAI Reveals Security Breach, Tightens macOS App Verification Protocols
On Friday, OpenAI said it uncovered a security problem tied to Axios, a third-party developer library, and moved to tighten the way its macOS apps are verified so impostor software can't masquerade as official releases. Reuters reported that OpenAI said it did not find signs that customer information was accessed, that its internal environment or intellectual property was breached, or that its codebase was modified. In the San Francisco case, police said officers were called around 4:12 a.m. to a report of an incendiary device thrown at a residence, and the suspect ran off before being detained about an hour later after another call about a person threatening to ignite a separate building.

What OpenAI's Security Breach Reveals

As per the report, OpenAI is updating its security credentials and requiring Mac users to upgrade to the latest application releases. The company also set a deadline: starting May 8, older builds of its macOS desktop software are slated to lose updates and support, and could stop working. That software-hardening push comes as OpenAI has been navigating criticism tied to a reported deal involving U.S. government use of its tools in classified military settings. Altman, writing in a blog post after the firebomb allegation, said, "A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology."

How A Supply-Chain Attack Unfolded

OpenAI said Axios was tampered with on March 31 as part of a wider software supply-chain campaign that the company believes traces back to North Korea-linked actors. The company said the compromise caused a GitHub Actions workflow to pull and run a malicious Axios version, and that the workflow could reach certificate and notarization materials used to sign macOS apps. The outlet reports that OpenAI's internal probe found the workflow's signing certificate most likely remained intact despite the attack.
OpenAI also said passwords and OpenAI API keys were not impacted. In the San Francisco arrest, authorities said evidence ties the suspect to both the alleged Molotov incident and the later threats, and police reported no injuries.

Cybersecurity Enhancements Fuel Revenue Aspirations

Additionally, OpenAI is reportedly finalizing a model with enhanced cybersecurity features through its "Trusted Access for Cyber" program, which it plans to deploy to a select group of companies, reflecting its commitment to addressing security concerns in tandem with its growth trajectory. This emphasis on security is particularly relevant given the recent incidents surrounding the company.

Why Timely Response Is Crucial For Tech Firms

OpenAI confirmed it is cooperating with law enforcement in the Altman incident, and a spokesperson told Reuters, "Thankfully, no one was hurt. We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe," while adding the company is assisting investigators. Altman also urged a lower temperature in the debate around artificial intelligence, writing, "While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally." On the product side, OpenAI's macOS update requirement effectively turns patching into a gatekeeper for app legitimacy, aiming to reduce the odds that a forged build can circulate with credible-looking signing. The company framed the move as a preventative step tied to how its macOS apps are certified, rather than a response to confirmed user-data theft.
OpenAI revealed a security breach that occurred on March 31, when hackers compromised the Axios JavaScript library, potentially exposing signing certificates for macOS applications including ChatGPT. Google linked the attack to North Korean hackers. While no user data was compromised, OpenAI will mandate updates for Mac users and stop supporting older app versions on May 8.
OpenAI disclosed a significant security breach on Friday that exposed its macOS applications to a supply chain attack through a compromised third-party library. The incident unfolded on March 31, when a GitHub workflow used by the company to sign certificates for macOS applications downloaded a malicious update of Axios, a widely used JavaScript library for making HTTP requests [1]. On the same day, hackers who had hijacked a developer's account published two infected updates to the Axios developer library before detection [1]. The Axios library involved in this breach is not affiliated with Axios Media.
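The poisoned-update pattern described above is exactly what content-addressed integrity pinning defends against: a project records a digest of the dependency it reviewed, and any later artifact that differs is refused. npm lockfiles (the `integrity` fields in `package-lock.json`) automate this for libraries such as axios. A minimal sketch of the idea, with hypothetical artifact bytes and version number:

```python
# Sketch of integrity pinning against a poisoned package update.
# The artifact contents and version are hypothetical placeholders.
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a dependency artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Digest recorded when the dependency was last reviewed and pinned.
known_good = b"axios 1.2.3 tarball contents"   # hypothetical known-good build
pinned = digest(known_good)

def safe_to_install(artifact: bytes) -> bool:
    """Refuse any artifact whose digest differs from the pinned one."""
    return digest(artifact) == pinned

print(safe_to_install(known_good))        # True: matches the pinned digest
print(safe_to_install(b"tampered update"))  # False: install is refused
```

Under this scheme the two infected updates would only have reached builds that resolved dependency ranges freshly instead of installing strictly from a lockfile.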
Source: Benzinga
The vulnerability could have allowed hackers to exfiltrate signing certificates for macOS that would enable them to create phony OpenAI applications appearing legitimate to both devices and the App Store. OpenAI macOS applications including ChatGPT, Atlas, and Codex were potentially affected by the compromise [1]. The GitHub workflow in question had access to certificate and notarization materials used to authenticate macOS apps, creating a serious cybersecurity risk [2]. However, OpenAI's internal investigation found that the workflow's signing certificate most likely remained intact despite the attack [2].

OpenAI emphasized there is no evidence of user data compromise, intellectual property theft, or internal systems penetration. The company confirmed that passwords and OpenAI API keys were not impacted by the breach [2]. OpenAI hasn't detected any signs that iOS, Android, Windows, or other platforms' apps have been affected, limiting the scope to macOS environments [1]. Despite this reassurance, the incident highlights how AI companies have become prime targets for classic software supply chain attacks, not just novel AI-specific threats [1].
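The reason exfiltrated signing material is so dangerous is that a signature only proves possession of the key, not the good intent of the signer: anyone holding the key can produce a "genuine-looking" app. A toy sketch of that property, using HMAC as a stand-in for Apple's certificate-based code signing (the real macOS chain uses Apple-issued Developer ID certificates and notarization, not HMAC):

```python
# Illustrative only: HMAC stands in for real certificate-based code signing.
import hashlib
import hmac

SIGNING_KEY = b"stand-in for the private signing key"  # hypothetical secret

def sign(app_bytes: bytes) -> str:
    """Produce a signature over the app's contents."""
    return hmac.new(SIGNING_KEY, app_bytes, hashlib.sha256).hexdigest()

def verify(app_bytes: bytes, signature: str) -> bool:
    """A device accepts the app only if the signature checks out."""
    return hmac.compare_digest(sign(app_bytes), signature)

genuine = b"official app bundle contents"
sig = sign(genuine)
print(verify(genuine, sig))                 # True: untampered build accepted
print(verify(b"phony lookalike app", sig))  # False: forged build rejected
```

The whole scheme collapses if `SIGNING_KEY` leaks: an attacker could then call `sign()` on a phony bundle and produce a signature devices would accept, which is why OpenAI rotated its credentials even though the certificate most likely stayed intact.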
Source: Axios
Google has linked the broader hacking campaign to a North Korean hacker group, suggesting state-sponsored actors are actively targeting major technology firms [1]. OpenAI confirmed the compromise was part of a wider software supply-chain campaign that traces back to North Korea-linked actors [2]. This attribution raises concerns about the sophistication and persistence of the threats facing companies developing cutting-edge artificial intelligence technologies.

As a precautionary measure, OpenAI will stop supporting older versions of its macOS apps on May 8, giving users a 30-day window to update before the revoked certificate could block new downloads and first-time launches [1]. The company is updating its security credentials and requiring Mac users to upgrade to the latest application releases, effectively turning patching into a gatekeeper for app legitimacy [2]. Additionally, OpenAI is finalizing a model with enhanced cybersecurity features through its "Trusted Access for Cyber" program, which it plans to deploy to a select group of companies [2]. This incident underscores the critical importance of timely response and proactive security measures as OpenAI navigates both technical vulnerabilities and growing scrutiny around its partnerships and deployment of AI tools in sensitive environments.
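A "patching as gatekeeper" policy of the kind described above usually reduces to a minimum-version check: builds older than a cutoff tied to the re-issued certificate are simply refused. OpenAI has not published its mechanism or version numbers, so the cutoff below is hypothetical; this is only a sketch of the comparison logic:

```python
# Minimal sketch of a minimum-supported-version gate.
# The cutoff version is a hypothetical placeholder.
def parse(version: str) -> tuple[int, ...]:
    """Turn '2.4.9' into (2, 4, 9) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

MIN_SUPPORTED = "2.5.0"  # hypothetical build tied to the re-issued certificate

def is_supported(app_version: str) -> bool:
    return parse(app_version) >= parse(MIN_SUPPORTED)

print(is_supported("2.4.9"))  # False: pre-cutoff build would be blocked
print(is_supported("2.6.0"))  # True: updated build keeps working
```

Parsing into integer tuples matters: comparing the raw strings would wrongly rank "2.10.0" below "2.5.0".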