2 Sources
[1]
Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection
A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection. LangChain Core (i.e., langchain-core) is a core Python package that's part of the LangChain ecosystem, providing the core interfaces and model-agnostic abstractions for building applications powered by LLMs.

The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat has been credited with reporting the vulnerability on December 4, 2025. It has been codenamed LangGrinch.

"A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions," the project maintainers said in an advisory. "The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries."

"The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data."

According to Cyata researcher Porat, the crux of the problem has to do with the two functions failing to escape user-controlled dictionaries containing "lc" keys. The "lc" marker represents LangChain objects in the framework's internal serialization format.

"So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content including an 'lc' key, they would instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths," Porat said.

This could have various outcomes, including secret extraction from environment variables when deserialization is performed with "secrets_from_env=True" (previously set by default), instantiating classes within pre-approved trusted namespaces, such as langchain_core, langchain, and langchain_community, and potentially even leading to arbitrary code execution via Jinja2 templates.

What's more, the escaping bug enables the injection of LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata via prompt injection.

The patch released by LangChain introduces new restrictive defaults in load() and loads() by means of an allowlist parameter "allowed_objects" that allows users to specify which classes can be serialized/deserialized. In addition, Jinja2 templates are blocked by default, and the "secrets_from_env" option is now set to "False" to disable automatic secret loading from the environment.

The following versions of langchain-core are affected by CVE-2025-68664 -

It's worth noting that there exists a similar serialization injection flaw in LangChain.js that also stems from not properly escaping objects with "lc" keys, thereby enabling secret extraction and prompt injection. This vulnerability has been assigned the CVE identifier CVE-2025-68665 (CVSS score: 8.6). It impacts the following npm packages -

In light of the criticality of the vulnerability, users are advised to update to a patched version as soon as possible for optimal protection.

"The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations," Porat said. "This is exactly the kind of 'AI meets classic security' intersection where organizations get caught off guard. LLM output is an untrusted input."
[2]
Critical 'LangGrinch' vulnerability in langchain-core puts AI agent secrets at risk - SiliconANGLE
A new report out today from artificial intelligence security startup Cyata Security Ltd. has detailed a recently uncovered critical vulnerability in langchain-core, the foundational library behind LangChain-based agents used widely in artificial intelligence production environments. The vulnerability, tracked as CVE-2025-68664 and dubbed "LangGrinch," has a Common Vulnerability Scoring System score of 9.3. It can allow attackers to exfiltrate sensitive secrets from affected systems and could potentially escalate to remote code execution under certain conditions.

Langchain-core sits at the heart of the agentic AI ecosystem and acts as a core dependency for countless frameworks and applications. According to Cyata, public package download trackers show langchain-core at approximately 847 million total downloads, with tens of millions of downloads in the last 30 days. The broader LangChain package has approximately 98 million downloads per month, highlighting how deeply embedded the vulnerable component is across modern AI workflows.

LangGrinch occurs due to a serialization and deserialization injection vulnerability in langchain-core's built-in helper functions. An attacker can exploit the vulnerability by steering an AI agent through prompt injection into generating crafted structured outputs that include LangChain's internal marker key. Because the marker key is not properly escaped during serialization, the data can later be deserialized and interpreted as a trusted LangChain object rather than untrusted user input.

"What makes this finding interesting is that the vulnerability lives in the serialization path, not the deserialization path. In agent frameworks, structured data produced downstream of a prompt is often persisted, streamed and reconstructed later," explains Yarden Porat, security researcher at Cyata. "That creates a surprisingly large attack surface reachable from a single prompt."

Once the vulnerability is triggered, it can result in full environment variable exfiltration via outbound HTTP requests. The exposure may include cloud provider credentials, database and RAG connection strings, vector database secrets and large language model application programming interface keys.

Cyata's researchers were able to identify 12 distinct reachable exploit flows, highlighting how routine agent operations such as persisting, streaming and reconstructing structured data can unintentionally open an attack path. Notably, the vulnerability exists in langchain-core itself and does not depend on third-party tools, integrations, or connectors. Cyata emphasized that this makes the flaw particularly dangerous, as it lives in what the company described as the ecosystem's "plumbing layer," exercised continuously by many production systems.

Patches are now available in langchain-core versions 1.2.5 and 0.3.81, and Cyata is urging organizations to update immediately. Before going public, Cyata ethically disclosed the details to the LangChain maintainers, whom it commended for decisive remediation and security hardening steps beyond the immediate fix.

"As agents move into production, the security question shifts from 'what code do we run' to 'what effective permissions does this system end up exercising,'" said Shahar Tal, chief executive officer and co-founder at Cyata. "With agentic identities, you want tight defaults, clear boundaries and the ability to reduce blast radius when something goes wrong."
A critical security vulnerability dubbed LangGrinch has been discovered in LangChain Core, scoring 9.3 on the CVSS scale. The serialization injection flaw could allow attackers to steal sensitive secrets including cloud credentials and API keys, and even manipulate AI responses through prompt injection. With langchain-core recording 847 million total downloads, the vulnerability affects a massive portion of the AI ecosystem.
A critical security vulnerability has emerged in LangChain Core that puts AI agent secrets at risk across production environments. Tracked as CVE-2025-68664 and dubbed LangGrinch, the flaw carries a CVSS score of 9.3 out of 10.0, indicating severe potential impact [1]. Security researcher Yarden Porat from Cyata Security discovered and reported the LangChain vulnerability on December 4, 2025, revealing how attackers could exfiltrate sensitive secrets and manipulate large language model responses [2].
The vulnerability affects langchain-core, the foundational Python package providing core interfaces and model-agnostic abstractions for building LLM-powered applications. Public package download trackers show langchain-core at approximately 847 million total downloads, with tens of millions of downloads in the last 30 days alone [2]. The broader LangChain package receives approximately 98 million downloads per month, highlighting how deeply embedded this vulnerable component is across modern AI workflows.
The LangChain Core vulnerability stems from a serialization injection flaw in the dumps() and dumpd() functions. These functions fail to properly escape user-controlled dictionaries containing "lc" keys, which LangChain uses internally to mark serialized objects. The improper handling of 'lc' keys creates a dangerous attack vector where user-controlled data containing this key structure gets treated as legitimate LangChain objects during deserialization rather than plain user data.
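To make the failure mode concrete, here is a minimal sketch of the bug class, assuming a deliberately simplified serializer and loader; it is not langchain-core's actual implementation, and the names (naive_dumps, naive_load, Greeting, REGISTRY) are invented for illustration. The point is that once escaping is skipped, a free-form user dictionary carrying an "lc" envelope becomes indistinguishable from a genuine serialized object at load time.

```python
# Simplified model of the bug class described in the advisory -- NOT langchain-core's
# real code. All names here are illustrative.
import json


class Greeting:
    def __init__(self, text: str):
        self.text = text


REGISTRY = {"Greeting": Greeting}  # "trusted" constructors the loader will instantiate


def naive_dumps(obj) -> str:
    """Serialize known objects with an 'lc' envelope; pass dicts through unescaped."""
    if isinstance(obj, Greeting):
        return json.dumps({"lc": 1, "type": "constructor",
                           "id": ["Greeting"], "kwargs": {"text": obj.text}})
    return json.dumps(obj)  # BUG: a user dict that already contains "lc" is not escaped


def naive_load(raw: str):
    """Anything shaped like an 'lc' envelope is treated as a trusted object."""
    data = json.loads(raw)
    if isinstance(data, dict) and data.get("lc") == 1:
        cls = REGISTRY[data["id"][0]]
        return cls(**data["kwargs"])  # attacker-chosen class and kwargs
    return data


# User-controlled metadata, e.g. produced by a prompt-injected LLM response:
user_metadata = {"lc": 1, "type": "constructor",
                 "id": ["Greeting"], "kwargs": {"text": "injected!"}}

round_tripped = naive_load(naive_dumps(user_metadata))
print(type(round_tripped))  # <class '__main__.Greeting'> -- plain data became an object
```

A correct serializer would escape or reject user dictionaries that already contain the internal marker before writing them out, which is conceptually what the fix enforces.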
"What makes this finding interesting is that the vulnerability lives in the serialization path, not the deserialization path," explains Yarden Porat. "In agent frameworks, structured data produced downstream of a prompt is often persisted, streamed and reconstructed later. That creates a surprisingly large attack surface reachable from a single prompt"
2
.Once exploited, attackers can instantiate unsafe arbitrary objects, potentially triggering multiple attack paths. The most severe outcome involves secret extraction from environment variables when deserialization is performed with "secrets_from_env=True," which was previously set by default. This exposure may include cloud provider credentials, database connection strings, vector database secrets, and LLM API keys.
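For illustration, a payload targeting the environment-secret path could look roughly like the structure below. The "lc" envelope shape follows the format described in the advisory; which fields a given langchain-core version actually resolves, and whether resolution happens at all, depend on the secrets_from_env setting, so treat this as a hedged sketch rather than a working exploit.

```python
# Hedged illustration of an "lc" secret-reference payload, based on the envelope
# format described in the advisory -- not a verified working exploit. On vulnerable
# versions with secrets_from_env=True, a reference like this could be resolved from
# the process environment during deserialization.
injected_metadata = {
    "lc": 1,
    "type": "secret",            # marks a secret reference rather than plain data
    "id": ["OPENAI_API_KEY"],    # environment variable an attacker would target
}
```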
The serialization injection flaw enables attackers to inject LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata via prompt injection techniques. Cyata Security identified 12 distinct reachable exploit flows, demonstrating how routine agent operations such as persisting, streaming, and reconstructing structured data can unintentionally open attack paths [2].

"The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations," Porat said. "This is exactly the kind of 'AI meets classic security' intersection where organizations get caught off guard. LLM output is an untrusted input" [1].
The vulnerability could potentially escalate to remote code execution via Jinja2 templates, and attackers can instantiate classes within pre-approved trusted namespaces such as langchain_core, langchain, and langchain_community.
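The Jinja2 escalation path is easier to reason about with a generic example: an unsandboxed Jinja2 template can walk Python object internals starting from any literal, which is the standard building block of template-injection-to-code-execution chains. The snippet below is plain jinja2 usage and deliberately harmless; it does not target LangChain itself.

```python
# Generic, harmless demonstration of why attacker-controlled Jinja2 templates are
# dangerous when rendered without a sandbox; this is not a LangChain exploit.
from jinja2 import Template

# From a simple string literal, the template author can reach arbitrary loaded
# classes via __class__/__mro__/__subclasses__ -- far beyond text templating.
attacker_template = "{{ ''.__class__.__mro__[1].__subclasses__() | length }}"
print(Template(attacker_template).render())  # prints the number of loaded classes
```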
Related Stories
LangChain maintainers have released patches in langchain-core versions 1.2.5 and 0.3.81 [2]. The patch introduces new restrictive defaults in the load() and loads() functions through an allowlist parameter called "allowed_objects" that lets users specify which classes can be serialized and deserialized. Additionally, Jinja2 templates are now blocked by default, and the "secrets_from_env" option is set to "False" to disable automatic secret loading from environment variables (a usage sketch appears below).

A similar serialization injection flaw exists in LangChain.js, tracked as CVE-2025-68665 with a CVSS score of 8.6. This vulnerability also stems from not properly escaping objects with "lc" keys, enabling secret extraction and prompt injection in JavaScript-based implementations.
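For teams moving to the patched langchain-core releases, the hardened loading pattern might look roughly like the sketch below. The allowed_objects keyword is named in the advisory; the exact signature and the form the allowlist entries take are assumptions here, so verify against the documentation for your installed version before relying on it.

```python
# Hedged sketch of the hardened deserialization pattern described in the advisory.
# The allowed_objects keyword comes from the patch notes; passing classes directly
# is an assumption and may differ from the real API in your langchain-core version.
from langchain_core.load import dumps, loads
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful assistant."), ("human", "{question}")]
)
blob = dumps(prompt)  # serialize as usual

# Only explicitly allowlisted classes should be reconstructed; anything else,
# including injected "lc" envelopes, should be rejected rather than instantiated.
restored = loads(blob, allowed_objects=[ChatPromptTemplate])
```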
Cyata Security commended LangChain maintainers for decisive remediation and security hardening steps beyond the immediate fix. "As agents move into production, the security question shifts from 'what code do we run' to 'what effective permissions does this system end up exercising,'" said Shahar Tal, CEO and co-founder at Cyata Security. "With agentic identities, you want tight defaults, clear boundaries and the ability to reduce blast radius when something goes wrong" [2]. Organizations running LangChain-based AI agents should update immediately to protect against potential exploitation of this critical security vulnerability in the AI ecosystem's plumbing layer.

Summarized by Navi