Critical LangGrinch Flaw in LangChain Core Exposes AI Agent Secrets and Enables Prompt Injection

Reviewed byNidhi Govil

2 Sources

Share

A critical security vulnerability dubbed LangGrinch has been discovered in LangChain Core, scoring 9.3 on the CVSS scale. The serialization injection flaw could allow attackers to steal sensitive secrets including cloud credentials and API keys, and even manipulate AI responses through prompt injection. With langchain-core recording 847 million total downloads, the vulnerability affects a massive portion of the AI ecosystem.

Critical Security Flaw Threatens AI Production Environments

A critical security vulnerability has emerged in LangChain Core that puts AI agent secrets at risk across production environments. Tracked as CVE-2025-68664 and dubbed LangGrinch, the flaw carries a CVSS score of 9.3 out of 10.0, indicating severe potential impact

1

. Security researcher Yarden Porat from Cyata Security discovered and reported the LangChain vulnerability on December 4, 2025, revealing how attackers could exfiltrate sensitive secrets and manipulate large language model responses

2

.

Source: SiliconANGLE

Source: SiliconANGLE

The vulnerability affects langchain-core, the foundational Python package providing core interfaces and model-agnostic abstractions for building LLM-powered applications. Public package download trackers show langchain-core at approximately 847 million total downloads, with tens of millions of downloads in the last 30 days alone

2

. The broader LangChain package receives approximately 98 million downloads per month, highlighting how deeply embedded this vulnerable component is across modern AI workflows.

Source: Hacker News

Source: Hacker News

Serialization Injection Flaw Enables Secret Extraction

The LangChain Core vulnerability stems from a serialization injection flaw in the dumps() and dumpd() functions. These functions fail to properly escape user-controlled dictionaries containing "lc" keys, which LangChain uses internally to mark serialized objects. The improper handling of 'lc' keys creates a dangerous attack vector where user-controlled data containing this key structure gets treated as legitimate LangChain objects during deserialization rather than plain user data.

"What makes this finding interesting is that the vulnerability lives in the serialization path, not the deserialization path," explains Yarden Porat. "In agent frameworks, structured data produced downstream of a prompt is often persisted, streamed and reconstructed later. That creates a surprisingly large attack surface reachable from a single prompt"

2

.

Once exploited, attackers can instantiate unsafe arbitrary objects, potentially triggering multiple attack paths. The most severe outcome involves secret extraction from environment variables when deserialization is performed with "secrets_from_env=True," which was previously set by default. This exposure may include cloud provider credentials, database connection strings, vector database secrets, and LLM API keys.

Prompt Injection Amplifies Attack Surface

The serialization injection flaw enables attackers to inject LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata via prompt injection techniques. Cyata Security identified 12 distinct reachable exploit flows, demonstrating how routine agent operations such as persisting, streaming, and reconstructing structured data can unintentionally open attack paths

2

.

"The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations," Porat said. "This is exactly the kind of 'AI meets classic security' intersection where organizations get caught off guard. LLM output is an untrusted input".

The vulnerability could potentially escalate to remote code execution via Jinja2 templates, and attackers can instantiate classes within pre-approved trusted namespaces such as langchain_core, langchain, and langchain_community.

Patches Available, JavaScript Version Also Affected

LangChain maintainers have released patches in langchain-core versions 1.2.5 and 0.3.81

2

. The patch introduces new restrictive defaults in load() and loads() functions through an allowlist parameter called "allowed_objects" that lets users specify which classes can be serialized and deserialized. Additionally, Jinja2 templates are now blocked by default, and the "secrets_from_env" option is set to "False" to disable automatic secret loading from environment variables.

A similar serialization injection flaw exists in LangChain.js, tracked as CVE-2025-68665 with a CVSS score of 8.6. This vulnerability also stems from not properly escaping objects with "lc" keys, enabling secret extraction and prompt injection in JavaScript-based implementations.

Cyata Security commended LangChain maintainers for decisive remediation and security hardening steps beyond the immediate fix. "As agents move into production, the security question shifts from 'what code do we run' to 'what effective permissions does this system end up exercising,'" said Shahar Tal, CEO and co-founder at Cyata Security. "With agentic identities, you want tight defaults, clear boundaries and the ability to reduce blast radius when something goes wrong"

2

. Organizations running LangChain-based AI agents should update immediately to protect against potential exploitation of this critical security vulnerability in the AI ecosystem's plumbing layer.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo