3 Sources
[1]
OpenAI flags software supply chain scare
Why it matters: The incident could have allowed hackers to exfiltrate a certificate that could make phony OpenAI apps look legitimate, although OpenAI says it hasn't seen this happen.
* Google has also linked the broader hacking campaign to a North Korean hacker group.

Zoom in: OpenAI said in a blog post Friday night that a GitHub workflow the company uses to sign certificates for macOS applications downloaded a malicious update of the Axios library on March 31.
* On the same day, hackers who hijacked a developer's account published two infected updates to the Axios library before anyone noticed.
* Axios, a widely used JavaScript library for making HTTP requests, is not affiliated with Axios Media.
* Users of its macOS applications, including ChatGPT, Atlas, and Codex, could have been affected, the company said.

Threat level: Access to that system could have allowed hackers to create phony OpenAI applications carrying the legitimate back-end certificate needed to trick devices and the App Store into treating them as real.

Yes, but: OpenAI says there's no evidence that any user data, intellectual property, or internal systems were compromised.
* OpenAI hasn't detected any signs that iOS, Android, Windows, or other platforms' apps have been affected.

State of play: AI companies are now prime targets for classic software supply chain attacks, not just novel AI-specific threats.

What's next: OpenAI will stop supporting older versions of its macOS apps on May 8, out of an abundance of caution.
* The company says users have a 30-day window to update before the revoked certificate could block new downloads and first-time launches.
[2]
OpenAI apps for MacOS exposed by threat
A wider-ranging security incident reported by Google Threat Intelligence Group last week prompted OpenAI to take action around its certification process.

OpenAI said on Friday (10 April) that it would be safeguarding and updating the certification process for its apps running on macOS following reports of a security issue around a third-party development tool. The company said it would update the security certification process for its macOS apps out of "an abundance of caution", having found no evidence that OpenAI user data was accessed, that its systems or intellectual property were compromised, or that its software was altered.

The incident reported by Google Threat Intelligence Group centred around exploits of a third-party tool named Axios, which prompted OpenAI to consider and take steps against the possibility "of someone attempting to distribute a fake app that appears to be from OpenAI", the company said. According to the company, this "unlikely" scenario led it to revoke and replace existing security certifications for macOS versions of its chatbot ChatGPT, coding tool Codex and web browser Atlas.

OpenAI said Mac users of any of these apps are required to update to their newest versions to ensure compliance with the new security protocols, adding that "older versions of our MacOS desktop apps will no longer receive updates or support, and may not be functional". User passwords and OpenAI keys were unaffected by the potential breach, and no evidence of "malware signed as OpenAI" had been detected, the company said. It added that after 8 May, new downloads and launches of apps signed with old security certificates will be blocked by macOS security protections.

The potential security threat does not affect iOS, Android, Linux, Windows or web versions of OpenAI apps, the company said, and only users of its macOS versions need to take action.
The "root cause" of the security incident was a "misconfiguration in the GitHub Actions workflow" that has since been addressed, according to OpenAI.

Last month, reports emerged of the AI giant's plans to consolidate its chatbot, coding and web browsing tools into a single 'superapp' for desktop in the face of fierce competition from Anthropic. The following week, it decided to shut down its controversial AI video generator Sora and sideline plans for an 'erotic' version of ChatGPT to focus instead on its core enterprise business.
[3]
OpenAI Reveals Security Breach, Tightens macOS App Verification Protocols
On Friday, OpenAI said it uncovered a security problem tied to Axios, a third-party developer library, and moved to tighten the way its macOS apps are verified so impostor software can't masquerade as official releases. Reuters reported that OpenAI said it did not find signs that customer information was accessed, that its internal environment or intellectual property was breached, or that its codebase was modified.

In the San Francisco case, police said officers were called around 4:12 a.m. to a report of an incendiary device thrown at a residence, and the suspect ran off before being detained about an hour later after another call about a person threatening to ignite a separate building.

What OpenAI's Security Breach Reveals

As per the report, OpenAI is updating its security credentials and requiring Mac users to upgrade to the latest application releases. The company also set a deadline: starting May 8, older builds of its macOS desktop software are slated to lose updates and support, and could stop working.

That software-hardening push comes as OpenAI has been navigating criticism tied to a reported deal involving U.S. government use of its tools in classified military settings. Altman, writing in a blog post after the firebomb allegation, said, "A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology."

How A Supply-Chain Attack Unfolded

OpenAI said Axios was tampered with on March 31 as part of a wider software supply-chain campaign that the company believes traces back to North Korea-linked actors. The company said the compromise caused a GitHub Actions workflow to pull and run a malicious Axios version, and that the workflow could reach certificate and notarization materials used to sign macOS apps. The outlet reports that OpenAI's internal probe found the workflow's signing certificate most likely remained intact despite the attack.
OpenAI also said passwords and OpenAI API keys were not impacted. In the San Francisco arrest, authorities said evidence ties the suspect to both the alleged Molotov incident and the later threats, and police reported no injuries.

Cybersecurity Enhancements Fuel Revenue Aspirations

Additionally, OpenAI is reportedly finalizing a model with enhanced cybersecurity features through its "Trusted Access for Cyber" program, which it plans to deploy to a select group of companies, reflecting its commitment to addressing security concerns in tandem with its growth trajectory. This emphasis on security is particularly relevant given the recent incidents surrounding the company.

Why Timely Response Is Crucial For Tech Firms

OpenAI confirmed it is cooperating with law enforcement in the Altman incident, and a spokesperson told Reuters, "Thankfully, no one was hurt. We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe," while adding the company is assisting investigators. Altman also urged a lower temperature in the debate around artificial intelligence, writing, "While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally."

On the product side, OpenAI's macOS update requirement effectively turns patching into a gatekeeper for app legitimacy, aiming to reduce the odds that a forged build can circulate with credible-looking signing. The company framed the move as a preventative step tied to how its macOS apps are certified, rather than a response to confirmed user-data theft.
OpenAI disclosed a security incident in which a third-party developer library compromised on March 31 could have enabled hackers to create fake OpenAI apps. The company is now revoking certificates and requiring Mac users to update ChatGPT, Atlas, and Codex by May 8. Google Threat Intelligence linked the broader campaign to North Korean hackers, though OpenAI found no evidence of user data compromise.
OpenAI disclosed a security incident on Friday that exposed its macOS apps to a potential software supply chain attack, prompting the company to overhaul its security certification process and mandate immediate user updates [1]. The breach occurred on March 31 when a GitHub Actions workflow used to sign certificates for macOS applications downloaded a malicious update from the Axios developer library, a widely used JavaScript library for making HTTP requests that is unaffiliated with Axios Media [1]. Hackers who hijacked a developer's account published two infected updates to the Axios library before detection, creating a vulnerability that could have allowed attackers to exfiltrate certificates and create fake OpenAI apps that would appear legitimate to devices and the App Store [1].
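The attack path described here, a CI workflow pulling a freshly published, attacker-controlled release of a dependency, is exactly what lockfile pinning is meant to block. As a rough sketch (the lockfile excerpt below is hypothetical, not OpenAI's configuration), npm records an integrity hash for every installed package, and a simple check can flag any dependency that lacks an exact pin:

```python
import json

# Hypothetical excerpt of an npm package-lock.json (lockfileVersion 3 layout).
# npm records an "integrity" hash (a Subresource Integrity string) for each
# package; `npm ci` verifies the downloaded tarball against it, so a newly
# published malicious version cannot silently replace the pinned one.
LOCKFILE_JSON = """
{
  "lockfileVersion": 3,
  "packages": {
    "": { "name": "example-app" },
    "node_modules/axios": {
      "version": "1.6.8",
      "resolved": "https://registry.npmjs.org/axios/-/axios-1.6.8.tgz",
      "integrity": "sha512-EXAMPLEHASH"
    }
  }
}
"""

def unpinned_packages(lock: dict) -> list:
    """Return package paths that lack an exact version or an integrity hash."""
    bad = []
    for path, meta in lock.get("packages", {}).items():
        if path == "":  # the root project entry carries no integrity field
            continue
        if "version" not in meta or "integrity" not in meta:
            bad.append(path)
    return bad

lock = json.loads(LOCKFILE_JSON)
print(unpinned_packages(lock))  # an empty list means every dependency is pinned
```

Because `npm ci` refuses to install anything that deviates from the committed lockfile, a build that only ever runs `npm ci` would not have pulled a newly published infected release unless someone first committed an updated lockfile.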
Google Threat Intelligence Group connected the wider hacking campaign to a North Korean hacker group, underscoring how AI companies have become prime targets for classic software supply chain attacks alongside novel AI-specific threats [1][2]. The compromise affected the company's GitHub workflow, which could reach signing certificates and notarization materials used to authenticate macOS versions of ChatGPT, Codex, and Atlas [3]. Despite the severity of the vulnerability, OpenAI emphasized that there was no evidence any user data, intellectual property, or internal systems were compromised, and no signs that iOS, Android, Windows, or other platforms' apps were affected [1][2].
In response to the security incident, OpenAI is implementing stringent macOS app verification protocols and revoking existing security certifications out of an abundance of caution [2]. The company will stop supporting older versions of its macOS apps on May 8, giving users a 30-day window to update before the revoked certificate could block new downloads and first-time launches [1]. Mac users of ChatGPT, Codex, and Atlas are required to upgrade to the newest versions to ensure compliance with new security protocols, as older versions will no longer receive updates or support and may become non-functional [2]. The root cause was identified as a misconfiguration in the GitHub Actions workflow, which has since been addressed [2].
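The sources identify only "a misconfiguration in the GitHub Actions workflow" without further detail, so the following is a generic sketch rather than OpenAI's actual fix: a small audit over a hypothetical workflow file that flags two common supply-chain smells in CI, actions referenced by movable tags instead of commit SHAs, and `npm install` where the lockfile-enforcing `npm ci` belongs.

```python
import re

# Hypothetical workflow excerpt; the sources do not publish OpenAI's real file,
# and the pinned SHA below is an illustrative placeholder, not a real commit.
WORKFLOW = """\
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@08c6903cd8c0fde910a37f88322edcfb5dd907a8
  - run: npm install
"""

SHA_REF = re.compile(r"@([0-9a-f]{40})$")  # a full 40-char commit SHA pin

def audit(workflow_text):
    """Flag supply-chain-relevant smells in a GitHub Actions workflow:
    actions referenced by movable tags instead of commit SHAs, and
    `npm install` where `npm ci` (which enforces the lockfile) belongs."""
    findings = []
    for line in workflow_text.splitlines():
        line = line.strip()
        if line.startswith("- uses:"):
            ref = line.split("uses:", 1)[1].strip()
            if not SHA_REF.search(ref):
                findings.append(f"unpinned action: {ref}")
        if line.startswith("- run:") and "npm install" in line:
            findings.append("use `npm ci` instead of `npm install` in CI")
    return findings

for finding in audit(WORKFLOW):
    print(finding)
```

A tag like `@v4` can be re-pointed at new code by whoever controls the action's repository, whereas a commit SHA is immutable, which is why SHA pinning is the standard hardening advice for workflows that touch signing material.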
The incident highlights the escalating cybersecurity challenges facing AI companies as they become high-value targets for state-sponsored actors and sophisticated threat groups. OpenAI confirmed that user passwords and OpenAI API keys were unaffected by the potential breach, and no evidence of malware signed as OpenAI had been detected [2]. The company is reportedly finalizing a model with enhanced cybersecurity features through its Trusted Access for Cyber program, which it plans to deploy to a select group of companies, reflecting its commitment to addressing security concerns alongside its growth trajectory [3]. This proactive approach to macOS app verification effectively turns patching into a gatekeeper for app legitimacy, aiming to reduce the odds that forged builds can circulate with credible-looking signing [3]. The timing is particularly significant as OpenAI navigates broader scrutiny and competition in the AI sector, with the company framing the move as a preventative step rather than a response to confirmed user data theft [3].

Summarized by Navi