OpenAI, Mistral AI breached as Shai-Hulud malware compromises 172 developer packages


A sophisticated supply-chain attack dubbed Mini Shai-Hulud has compromised over 172 npm and PyPI packages, breaching major technology companies including OpenAI, Mistral AI, TanStack, UiPath, and Guardrails AI. The credential-stealing malware, attributed to the TeamPCP threat group, hijacked legitimate publishing pipelines to deliver validly signed malicious packages that stole GitHub tokens, cloud credentials, and CI/CD secrets from developer environments.

Major AI Companies Fall Victim to Sophisticated Supply-Chain Attack

A massive supply-chain attack has struck the heart of AI development infrastructure, compromising over 172 packages across the npm and PyPI ecosystems and breaching systems at OpenAI, Mistral AI, TanStack, and other major technology companies. The Mini Shai-Hulud worm, attributed to the TeamPCP threat group, represents an alarming escalation in software supply-chain threats: it produced the first documented npm malware carrying valid SLSA Build Level 3 provenance attestations [2]. Between May 11 and May 12, attackers published 84 malicious versions across 42 TanStack packages, and the campaign rapidly expanded to 403 malicious versions spanning multiple ecosystems [4]. The compromised npm packages appeared cryptographically authentic to developers, carrying legitimate signatures that made detection nearly impossible through conventional verification methods.

Source: VentureBeat

TanStack Compromise Exposes Critical CI/CD Vulnerabilities

The TanStack compromise began when attackers exploited a chain of three vulnerabilities to hijack the project's legitimate release pipeline. According to TanStack's post-mortem analysis, the attack started with a malicious fork that triggered a pull_request_target workflow, poisoned the GitHub Actions cache, and extracted OIDC tokens directly from runner process memory [3]. This allowed the attackers to publish packages through the project's own GitHub Actions release pipeline using the hijacked OIDC tokens, with each malicious package carrying valid provenance attestations tied to the legitimate Release workflow [2]. The @tanstack/react-router package alone receives 12.7 million weekly downloads, and the affected packages together account for 518 million cumulative downloads [4]. Security researchers at Endor Labs highlighted that an orphaned-commit trick let the attackers push malicious code to a fork that remained accessible through GitHub's shared fork object storage even though it did not belong to any branch [3].
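The pull_request_target trigger is dangerous precisely because it runs workflows with repository-scoped secrets against attacker-influenced input. As a rough illustration (not TanStack's actual pipeline), a defensive audit for that pattern might look like the sketch below; the regexes and file paths are simplifying assumptions, not a complete detection rule:

```python
import re
from pathlib import Path

# Heuristic audit for the workflow pattern abused in this campaign:
# a pull_request_target trigger (which runs with repo secrets)
# combined with a checkout of the untrusted PR head. Illustrative
# only; real workflows need manual review.
DANGEROUS_TRIGGER = re.compile(r"^\s*pull_request_target\s*:", re.MULTILINE)
UNTRUSTED_CHECKOUT = re.compile(
    r"ref:\s*\$\{\{\s*github\.event\.pull_request\.head\.(sha|ref)\s*\}\}"
)

def audit_workflow(text: str) -> list[str]:
    """Return findings for one workflow file's YAML text."""
    findings = []
    if DANGEROUS_TRIGGER.search(text):
        findings.append("uses pull_request_target (runs with secrets)")
        if UNTRUSTED_CHECKOUT.search(text):
            findings.append("checks out untrusted PR head under that trigger")
    return findings

def audit_repo(repo_root: str) -> dict[str, list[str]]:
    """Scan every workflow file in a checked-out repository."""
    results = {}
    for wf in Path(repo_root).glob(".github/workflows/*.y*ml"):
        findings = audit_workflow(wf.read_text(encoding="utf-8"))
        if findings:
            results[str(wf)] = findings
    return results
```

A hit on both findings in the same file is the combination worth escalating, since it means untrusted code can run where secrets are available.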

Credential-Stealing Malware Targets AI Developer Ecosystems

The credential-stealing malware deployed in this supply-chain attack demonstrates unprecedented sophistication in targeting AI developer ecosystems. The malicious code harvests credentials from over 100 file paths, including AWS keys, SSH private keys, npm tokens, GitHub personal access tokens, HashiCorp Vault tokens, Kubernetes service accounts, Docker configurations, and cryptocurrency wallets [4]. For the first time in a TeamPCP campaign, the malware targets password managers, including 1Password and Bitwarden, while also stealing Claude and Kiro AI agent configurations, including MCP server authentication tokens [4]. The payload includes an obfuscated JavaScript file that profiles the execution environment before launching comprehensive credential-theft operations [2]. Stolen data is exfiltrated to the filev2.getsession[.]org domain over Session Protocol infrastructure, a deliberate choice that helps evade detection: the domain belongs to a decentralized, privacy-focused messaging service unlikely to be blocked in enterprise environments [2].
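Teams assessing exposure can start by checking which of these credential files actually exist on developer machines. The sketch below uses a small, assumed subset of the locations named above (the real malware reportedly checks over 100 paths):

```python
from pathlib import Path

# Illustrative subset of the credential locations this stealer is
# reported to harvest; extend with your own inventory. Anything in
# the returned list should be treated as potentially exposed on a
# compromised host and rotated.
CANDIDATE_PATHS = [
    "~/.aws/credentials",        # AWS keys
    "~/.ssh/id_rsa",             # SSH private key
    "~/.npmrc",                  # npm tokens
    "~/.config/gh/hosts.yml",    # GitHub CLI tokens
    "~/.vault-token",            # HashiCorp Vault token
    "~/.kube/config",            # Kubernetes credentials
    "~/.docker/config.json",     # Docker registry auth
]

def exposed_credentials(paths=CANDIDATE_PATHS) -> list[str]:
    """Return the candidate paths that actually exist on this machine."""
    return [p for p in paths if Path(p).expanduser().is_file()]
```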

Mistral AI and PyPI Packages Compromised in Parallel Campaign

The Shai-Hulud malware campaign extended beyond npm to compromise PyPI packages, with Mistral AI confirming that attackers compromised version 2.4.6 of the mistralai package. Microsoft Threat Intelligence reported that the compromised package contained malicious code inserted into mistralai/client/__init__.py that silently downloaded a file from a remote IP address to /tmp/transformers.pyz and executed it automatically on import [1]. The filename was deliberately chosen to resemble Hugging Face's widely used Transformers AI framework, allowing the malware to blend into machine-learning environments and evade suspicion [1]. The payload contained country-aware logic designed to avoid Russian-language environments and included a geofenced destructive branch with a 1-in-6 chance of executing rm -rf / when the system appeared to be in Israel or Iran [2]. Additional compromised PyPI packages included [email protected], which executes malicious code on import without any integrity verification [2].
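A minimal triage script for the PyPI leg of the campaign can check for the reported indicators: the dropped payload file and the compromised mistralai 2.4.6 release. The sketch below uses only the IOCs named above; the pgmonitor.py path is an assumption, and both lists should be fed from an up-to-date intel source:

```python
import importlib.metadata
from pathlib import Path

# Indicators taken from the reporting above. The /tmp location of
# pgmonitor.py is assumed for illustration; the article gives only
# the filename.
IOC_FILES = ["/tmp/transformers.pyz", "/tmp/pgmonitor.py"]
BAD_VERSIONS = {"mistralai": {"2.4.6"}}

def check_host(ioc_files=IOC_FILES, bad_versions=BAD_VERSIONS) -> list[str]:
    """Return a list of indicator hits found on this host."""
    hits = [p for p in ioc_files if Path(p).exists()]
    for pkg, versions in bad_versions.items():
        try:
            installed = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            continue  # package not installed, nothing to flag
        if installed in versions:
            hits.append(f"{pkg}=={installed}")
    return hits
```

Any non-empty result warrants isolating the host and rotating the credentials enumerated earlier.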

Source: Hacker News

OpenAI Confirms Breach and Rotates Code-Signing Certificates

OpenAI confirmed that hackers tied to the Shai-Hulud malware campaign breached parts of its internal development environment through compromised TanStack npm packages. In a blog post, OpenAI stated that malware infected two employee devices and granted attackers access to a small number of internal code storage systems before the company stopped the activity [5]. The company observed activity consistent with the malware's publicly described behavior, including unauthorized access and credential-focused exfiltration activity in a limited subset of internal source code repositories [5]. The impacted repositories included code-signing certificates used for products on macOS, Windows, and iOS, prompting OpenAI to rotate certificates as a precautionary measure [5]. OpenAI said macOS users must update their applications before June 12, as older versions signed with previous certificates may stop functioning after that date [5].

Self-Propagating Worm Establishes Persistent Infrastructure Hooks

What makes this Mini Shai-Hulud worm particularly dangerous is its ability to self-propagate and establish persistent hooks that survive package removal. The malware installs persistence in Claude Code hooks and VS Code auto-run tasks, writing itself into .claude/settings.json and .vscode/tasks.json with runOn: folderOpen parameters that re-execute on every project launch [2]. These persistence mechanisms live in the project tree rather than node_modules, meaning uninstalling the malicious package does not remove the threat [3]. The worm also installs a gh-token-monitor service and injects two malicious GitHub Actions workflows to serialize repository secrets into JSON objects and upload the data to external servers [2]. The self-propagation mechanism locates publishable npm tokens with bypass_2fa set to true, enumerates every package published by the same maintainer, and exchanges GitHub OIDC tokens for per-package publish tokens to sidestep traditional authentication entirely [2].
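Because these hooks live in the project tree, cleanup requires scanning checkouts directly. The sketch below looks for the two persistence locations described above: VS Code tasks with a folderOpen auto-run trigger, and hook entries in Claude Code's settings file. It is a heuristic, not a remover; review any hit before deleting it:

```python
import json
from pathlib import Path

def suspicious_vscode_tasks(project: Path) -> list[dict]:
    """Return tasks in .vscode/tasks.json that auto-run on folder open."""
    tasks_file = project / ".vscode" / "tasks.json"
    if not tasks_file.is_file():
        return []
    try:
        data = json.loads(tasks_file.read_text(encoding="utf-8"))
    except json.JSONDecodeError:
        return []  # unparseable file: inspect manually
    return [
        t for t in data.get("tasks", [])
        if t.get("runOptions", {}).get("runOn") == "folderOpen"
    ]

def claude_hooks(project: Path) -> dict:
    """Return any hook entries found in .claude/settings.json."""
    settings = project / ".claude" / "settings.json"
    if not settings.is_file():
        return {}
    try:
        data = json.loads(settings.read_text(encoding="utf-8"))
    except json.JSONDecodeError:
        return {}
    return data.get("hooks", {})
```

Legitimate projects do occasionally use folderOpen tasks, so the output is a review queue, not an automatic block list.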

Source: BleepingComputer

Broader Implications for Developer Infrastructure Security

This attack exposes critical weaknesses in how developer infrastructure handles provenance verification and trust boundaries. Microsoft advised organizations to isolate affected Linux hosts, block outbound connections to malicious IP addresses, hunt for indicators including /tmp/transformers.pyz and pgmonitor.py, and rotate any potentially exposed CI/CD credentials immediately [1]. Security researchers emphasize that because the attack produces valid SLSA Build Level 3 attestations for malicious packages, signature and provenance checks alone are no longer sufficient; organizations should layer behavioral analysis at install time on top of signature-based verification [3]. The incident has been assigned CVE-2026-45321 with a CVSS score of 9.6 out of 10.0, indicating critical severity [2]. OpenAI noted that the attacks reflect a broader shift in the threat landscape, with attackers increasingly targeting shared software dependencies and development tooling rather than any single company [5]. Organizations using affected packages should assume credentials were exposed and implement lockfile-only installs to prevent automatic package updates that could introduce compromised versions.
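The lockfile-only advice can be enforced mechanically. Below is a sketch of a pre-install gate that refuses to proceed unless a lockfile exists and the project-level .npmrc pins hardened settings; the two config keys are real npm options, but the specific policy (which keys, which values) is this sketch's assumption, and yarn/pnpm projects would need different checks:

```python
from pathlib import Path

# Hardened npm settings this gate expects. ignore-scripts blocks
# install-time lifecycle scripts (a common payload delivery point);
# save-exact pins exact versions instead of semver ranges.
REQUIRED_NPMRC = {
    "ignore-scripts": "true",
    "save-exact": "true",
}

def lockfile_gate(project: Path) -> list[str]:
    """Return a list of problems; an empty list means the project passes."""
    problems = []
    if not (project / "package-lock.json").is_file():
        problems.append("missing package-lock.json (use `npm ci`)")
    settings = {}
    npmrc = project / ".npmrc"
    if npmrc.is_file():
        for line in npmrc.read_text(encoding="utf-8").splitlines():
            if "=" in line and not line.lstrip().startswith("#"):
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
    for key, want in REQUIRED_NPMRC.items():
        if settings.get(key) != want:
            problems.append(f".npmrc should set {key}={want}")
    return problems
```

Run in CI before any install step, this turns "assume credentials were exposed" into a checkable policy rather than a one-time cleanup.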
