Critical AI/ML Vulnerabilities Discovered in Python Libraries From Nvidia, Salesforce, and Apple

Security researchers uncovered vulnerabilities in popular Python libraries that power AI and ML models with millions of downloads. The flaws allow attackers to hide malicious code in model metadata that executes automatically when the model is loaded. Nvidia NeMo, Salesforce Uni2TS, and Apple FlexTok were all affected, with patches now released following responsible disclosure.

Security Flaws in AI Models Expose Millions to Attack

Palo Alto Networks' Unit 42 has identified critical AI/ML vulnerabilities in three widely used Python libraries that power machine learning models downloaded tens of millions of times from Hugging Face [1]. The security flaws in Nvidia NeMo, Salesforce Uni2TS, and Apple FlexTok allow attackers to embed malicious code within model metadata, which then executes automatically when files are loaded [2]. While no in-the-wild exploitation has been detected as of December 2025, the attack surface remains substantial given the libraries' widespread adoption across the AI research community.

Source: TechRadar

How Poisoned Model Metadata Enables Remote Code Execution

The vulnerabilities stem from how these Python libraries interact with Hydra, a configuration management tool maintained by Meta that is commonly deployed in machine learning projects [1]. Specifically, the issue centers on Hydra's instantiate() function, which the affected libraries use to load configurations from model metadata without proper sanitization. Curtis Carmony, a malware research engineer at Unit 42, explained that the function doesn't just accept class names to instantiate: it also takes any callable and passes along the provided arguments, enabling remote code execution through built-in Python functions like eval() and os.system() [1].
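Carmony's point can be illustrated with a toy re-implementation of the pattern. The `instantiate_like` helper below is a minimal sketch, not Hydra's actual code: it resolves a dotted `_target_` string to any importable callable and invokes it, which is exactly why a config that names `builtins.eval` becomes code execution.

```python
# Minimal sketch (NOT Hydra's real implementation) of the dynamic
# dispatch behind instantiate(): a "_target_" string names any
# importable callable, which is invoked with the remaining keys.
import importlib

def instantiate_like(config: dict):
    cfg = dict(config)
    path = cfg.pop("_target_")        # dotted path to any callable
    args = cfg.pop("_args_", [])      # positional args, Hydra-style
    module_name, _, attr = path.rpartition(".")
    target = getattr(importlib.import_module(module_name), attr)
    return target(*args, **cfg)       # nothing restricts this to constructors

# A benign config instantiates a class, as intended:
benign = {"_target_": "fractions.Fraction", "numerator": 3, "denominator": 4}
print(instantiate_like(benign))       # 3/4

# The same mechanism happily resolves builtins.eval:
malicious = {"_target_": "builtins.eval", "_args_": ["6 * 7"]}
print(instantiate_like(malicious))    # 42 -- an attacker-chosen expression ran
```

The two calls go through identical machinery; only the string in `_target_` differs, which is why metadata-supplied configs are so dangerous to instantiate unchecked.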

Attack Vector Targets Developer Trust in Open Models

The threat landscape is particularly concerning because developers routinely create variations of state-of-the-art models with different fine-tunings and quantizations, often from researchers unaffiliated with reputable institutions [1]. Attackers need only create a modification of an existing popular model with a real or claimed benefit, then inject malicious metadata. Hugging Face doesn't make metadata contents as easily accessible as other files, nor does it flag files using safetensors or NeMo formats as potentially unsafe [1]. With more than 100 different Python libraries used across Hugging Face models, nearly 50 of which rely on Hydra, the attack surface extends far beyond these three libraries [1].
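Since the platform doesn't surface metadata contents prominently, one cheap precaution is to read them yourself before handing a file to a model loader. The safetensors container is simple enough to inspect directly: an 8-byte little-endian length followed by a JSON header whose optional `__metadata__` key holds free-form strings. The helper below is a defensive sketch, not part of any of the affected libraries:

```python
# Read only the metadata block of a safetensors file, without loading
# tensors or invoking any model code. Format: 8-byte little-endian
# header length, then a JSON header with an optional "__metadata__" key.
import json
import struct

def read_safetensors_metadata(path: str) -> dict:
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```

Eyeballing the returned strings for suspicious targets (dotted paths like `os.system`) costs seconds and happens entirely outside the vulnerable loading path.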

Patches Issued Following Responsible Disclosure

All three companies were notified in April 2025 and released fixes by July 2025 [2]. Nvidia issued CVE-2025-23304 with a high severity rating of 7.8/10 and patched the flaw in NeMo version 2.3.2, which addresses arbitrary code execution risks in .nemo and .qnemo files [1]. Salesforce assigned CVE-2026-22584 a critical rating of 9.8/10 and remediated the issue in July 2025, with a spokesperson confirming no evidence of unauthorized access to customer data [1][2]. Apple FlexTok updated its code in June 2025 [2]. Meta has updated Hydra's documentation with warnings about RCE risks and recommends implementing a block-list mechanism, though this hasn't yet been released in an official Hydra version [1].
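In the absence of an official Hydra release, the block-list idea can be approximated in application code. The sketch below is illustrative only: the `BLOCKED` set and `check_target` name are assumptions, not Hydra API, and a block-list is inherently weaker than an allow-list of expected classes, since attackers can hunt for dangerous callables nobody thought to list.

```python
# Illustrative gate in the spirit of Meta's block-list recommendation:
# reject configs whose _target_ names a known-dangerous callable before
# anything resolves or calls it. BLOCKED and check_target are
# hypothetical names, not part of Hydra.
BLOCKED = {
    "builtins.eval", "builtins.exec", "builtins.getattr",
    "os.system", "subprocess.run", "subprocess.Popen",
}

def check_target(config: dict) -> dict:
    """Raise if the config's _target_ is on the block-list."""
    target = config.get("_target_", "")
    if target in BLOCKED:
        raise ValueError(f"refusing to instantiate blocked target: {target}")
    return config
```

Code that loads untrusted metadata would run every config through such a gate before passing it to instantiate(); an allow-list keyed to the classes a library actually expects would be stricter still.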

Implications for AI Security Practices

These security flaws in AI models highlight a growing concern: while formats like safetensors were designed to prevent arbitrary code execution during loading, the code that consumes them creates vulnerabilities [1]. Organizations deploying AI models should verify they're using patched versions of the affected libraries and scrutinize model sources more carefully. The incident underscores the need for enhanced metadata validation and clearer safety indicators on model-sharing platforms, particularly as AI adoption accelerates across industries relying on open-source frameworks.
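Verifying patch levels can be scripted. The helper below is a hedged sketch for auditing an environment against a known-fixed release (the article gives NeMo 2.3.2 as the patched version); it uses a naive dotted-number comparison, and production code should prefer `packaging.version` instead. Distribution names passed to it (e.g. NeMo's PyPI name) are the auditor's responsibility to confirm.

```python
# Check whether an installed distribution meets the first patched
# version. Naive numeric comparison -- prefer packaging.version in
# real audit scripts.
from importlib import metadata

def version_tuple(v: str) -> tuple:
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def is_patched(dist_name: str, fixed_in: str) -> bool:
    try:
        installed = metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return False  # not installed at all
    return version_tuple(installed) >= version_tuple(fixed_in)
```

Running such a check in CI for every library on the affected list turns "verify you're patched" from advice into an enforced gate.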
