3 Sources
[1]
AI framework flaws put enterprise clouds at risk of takeover
Two "easy-to-exploit" vulnerabilities in the popular open-source AI framework Chainlit put major enterprises' cloud environments at risk of leaking data or even full takeover, according to cyber-threat exposure startup Zafran. Chainlit is a Python package that organizations can use to build production-ready AI chatbots and applications. Corporations can either use Chainlit's built-in UI and backend, or create their own frontend on top of Chainlit's backend. It also integrates with other tools and platforms including LangChain, OpenAI, Bedrock, and LlamaIndex, and supports authentication and cloud deployment options. It's downloaded about 700,000 times every month and saw 5 million downloads last year. The two vulnerabilities are CVE-2026-22218, which allows arbitrary file read, and CVE-2026-22219, which can lead to server-side request forgery (SSRF) attacks on the servers hosting AI applications. While Zafran didn't see any indications of in-the-wild exploitation, "the internet-facing applications we observed belonged to the financial services and energy sectors, and universities are also using this framework," CTO Ben Seri told The Register. Zafran disclosed the bugs to the project's maintainers in November, and a month later, Chainlit released a patched version (2.9.4) that fixes the flaws. So if you use Chainlit, make sure to update the framework to the fixed release. The arbitrary file read flaw, CVE-2026-22218, has to do with how the framework handles elements - these are pieces of content, such as a file or image, that can be attached to a message. It can be triggered by sending a malicious update element request with a tampered custom element, and abused to exfiltrate environment variables by reading /proc/self/environ. "These variables often contain highly sensitive values that the system and enterprise depend on, including API keys, credentials, internal file paths, internal IPs, and ports," according to Zafran's analysis, shared with The Register ahead of publication. "This is mostly dangerous in AI systems where the servers have access to internal data of the company to provide a tailored chatbot experience to their users." In environments where authentication is enabled, attackers can steal secrets used to sign authentication tokens (CHAINLIT_AUTH_SECRET). These secrets, when combined with user identifiers - leaked from databases or inferred from organization emails - can be abused to forge authentication tokens and fully take over users' Chainlit accounts. Other environment variables up for grabs may include cloud credentials - such as AWS_SECRET_KEY - that Chainlit requires for cloud storage, along with sensitive API keys or the addresses and names of internal services. Plus, an attacker can probe these addresses using the second SSRF vulnerability to access sensitive data from internal REST APIs. Zafran found the SSRF vulnerability, CVE-2026-22219, in the SQLAlchemy data layer. This one is triggered in the same way as the arbitrary file read - via a tampered custom element. Then, the attacker can retrieve the copied file by extracting the element's "chainlit key" property from the metadata, download the file to an attacker-controlled computer, and query the file to access conversation history. According to Seri, the vulnerabilities are "easy to exploit," and can be combined in multiple ways to leak sensitive data, escalate privileges, and move laterally within the system. 
"An attacker only needs to send a simple command and change one value to point to the file or URL they want to access," he said. "Regarding how the vulnerabilities can be combined, SSRF typically requires knowledge of the server environment," Seri added. "By leveraging the read-file vulnerability to leak that information, such as environment details or internal addresses, it becomes much easier to successfully carry out the SSRF attack." Companies increasingly use AI frameworks to build their own AI chatbots and apps, and Seri acknowledges that organizations are "working under very tight timelines to deliver fully functioning AI systems that integrate with highly sensitive data." Using third-party frameworks and open-source code allows development teams to move fast - and it introduces new risks to the environment. "The risk is not the use of third-party code by itself, but the combination of rapid integration, limited understanding of the added code, and reliance on external maintainers for security and code quality," Seri said. "As a result, organizations end up deploying backend servers that communicate with clients, cloud resources, and LLMs, creating multiple entry points where vulnerabilities can emerge and put the system at risk." ®
[2]
Chainlit AI framework bugs let hackers breach cloud environments
Two high-severity vulnerabilities in Chainlit, a popular open-source framework for building conversational AI applications, allow reading any file on the server and leaking sensitive information.

The issues, dubbed 'ChainLeak' and discovered by Zafran Labs researchers, can be exploited without user interaction and impact "internet-facing AI systems that are actively deployed across multiple industries, including large enterprises."

The Chainlit AI app-building framework averages 700,000 monthly downloads on the PyPI registry and 5 million downloads per year. It provides a ready-made web UI for chat-based AI applications, backend plumbing tools, and built-in support for authentication, session handling, and cloud deployment. It is typically used in enterprise deployments and academic institutions, and is found in internet-facing production systems.

The two security issues that Zafran researchers discovered are an arbitrary file read tracked as CVE-2026-22218, and a server-side request forgery (SSRF) tracked as CVE-2026-22219.

CVE-2026-22218 can be exploited via the /project/element endpoint and allows attackers to submit a custom element with a controlled 'path' field, forcing Chainlit to copy the file at that path into the attacker's session without validation. This lets attackers read any file accessible to the Chainlit server, including sensitive information such as API keys, cloud account credentials, source code, internal configuration files, SQLite databases, and authentication secrets.

CVE-2026-22219 affects Chainlit deployments using the SQLAlchemy data layer, and is exploited by setting the 'url' field of a custom element, forcing the server to fetch the URL via an outbound GET request and store the response. Attackers may then retrieve the fetched data via element download endpoints, gaining access to internal REST services and probing internal IPs and services, the researchers say.

Zafran demonstrated that the two flaws can be combined into a single attack chain that enables full system compromise and lateral movement in cloud environments.

The researchers notified the Chainlit maintainers about the flaws on November 23, 2025, and received an acknowledgment on December 9, 2025. The vulnerabilities were fixed on December 24, 2025, with the release of Chainlit version 2.9.4.

Given the severity and exploitation potential of CVE-2026-22218 and CVE-2026-22219, impacted organizations are advised to upgrade to version 2.9.4 or later (the latest is 2.9.6) as soon as possible.
[3]
Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs
Security vulnerabilities have been uncovered in the popular open-source artificial intelligence (AI) framework Chainlit that could allow attackers to steal sensitive data and potentially move laterally within a susceptible organization.

Zafran Security said the high-severity flaws, collectively dubbed ChainLeak, could be abused to leak cloud environment API keys and steal sensitive files, or to perform server-side request forgery (SSRF) attacks against servers hosting AI applications.

Chainlit is a framework for creating conversational chatbots. According to statistics shared by the Python Software Foundation, the package has been downloaded over 220,000 times over the past week and has attracted a total of 7.3 million downloads to date.

Details of the two vulnerabilities are as follows -

- CVE-2026-22218 - An arbitrary file read vulnerability that can be triggered by tampering with a custom element's 'path' field
- CVE-2026-22219 - A server-side request forgery (SSRF) vulnerability affecting deployments that use the SQLAlchemy data layer

"The two Chainlit vulnerabilities can be combined in multiple ways to leak sensitive data, escalate privileges, and move laterally within the system," Zafran researchers Gal Zaban and Ido Shani said. "Once an attacker gains arbitrary file read access on the server, the AI application's security quickly begins to collapse. What initially appears to be a contained flaw becomes direct access to the system's most sensitive secrets and internal state."

For instance, an attacker can weaponize CVE-2026-22218 to read "/proc/self/environ," allowing them to glean valuable information such as API keys, credentials, and internal file paths that could be used to burrow deeper into the compromised network and even gain access to the application source code. Alternatively, it can be used to leak database files if the setup uses SQLAlchemy with an SQLite backend as its data layer.

Following responsible disclosure on November 23, 2025, both vulnerabilities were addressed by Chainlit in version 2.9.4, released on December 24, 2025.

"As organizations rapidly adopt AI frameworks and third-party components, long-standing classes of software vulnerabilities are being embedded directly into AI infrastructure," Zafran said. "These frameworks introduce new and often poorly understood attack surfaces, where well-known vulnerability classes can directly compromise AI-powered systems."

The disclosure comes as BlueRock revealed a vulnerability in Microsoft's MarkItDown Model Context Protocol (MCP) server, dubbed MCP fURI, that enables arbitrary calling of URI resources, exposing organizations to privilege escalation, SSRF, and data leakage attacks. The shortcoming affects the server when running on an Amazon Web Services (AWS) EC2 instance using IMDSv1.

"This vulnerability allows an attacker to execute the Markitdown MCP tool convert_to_markdown to call an arbitrary uniform resource identifier (URI)," BlueRock said. "The lack of any boundaries on the URI allows any user, agent, or attacker calling the tool to access any HTTP or file resource."

"When providing a URI to the Markitdown MCP server, this can be used to query the instance metadata of the server. A user can then obtain credentials to the instance if there is a role associated, giving you access to the AWS account, including the access and secret keys."

The agentic AI security company said its analysis of more than 7,000 MCP servers found that over 36.7% of them are likely exposed to similar SSRF vulnerabilities. To mitigate the risk posed by the issue, it's advised to use IMDSv2 to secure against SSRF attacks, implement private IP blocking, restrict access to metadata services, and create an allowlist to prevent data exfiltration.
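To make the first of those mitigations concrete, here is a minimal sketch of enforcing IMDSv2 on an EC2 instance with boto3; the region and instance ID are placeholders rather than values from the research:

```python
import boto3

# With HttpTokens="required", the metadata service only answers requests
# carrying a session token obtained via an HTTP PUT - something a blind,
# GET-only SSRF (as in the MCP fURI scenario) cannot perform.
ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    HttpTokens="required",              # reject token-less IMDSv1 requests
    HttpEndpoint="enabled",
    HttpPutResponseHopLimit=1,          # keep tokens from leaving the host
)
```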
Two high-severity security flaws in Chainlit, an open-source AI framework downloaded roughly 700,000 times monthly, allow attackers to read arbitrary files and launch server-side request forgery attacks. The vulnerabilities, dubbed ChainLeak, put enterprise cloud environments at risk of data theft and full system compromise across the financial services, energy, and academic sectors.

Two high-severity vulnerabilities in Chainlit, a widely used open-source AI framework for building conversational AI applications, have exposed enterprise cloud environments to significant cloud security risk. The flaws, collectively dubbed ChainLeak by cybersecurity firm Zafran, enable attackers to steal sensitive data and potentially take over enterprise cloud systems through a combination of arbitrary file read and server-side request forgery (SSRF) attacks [1][2].

Chainlit is a Python package that allows organizations to build production-ready AI chatbots and applications with a built-in UI, backend infrastructure, and integration support for platforms including LangChain, OpenAI, Bedrock, and LlamaIndex. The framework sees approximately 700,000 downloads monthly and reached 5 million downloads last year, making these vulnerabilities particularly concerning for the enterprise sector [1]. According to the Python Software Foundation, Chainlit has attracted 7.3 million downloads to date, with over 220,000 in the past week alone [3].
The first vulnerability, CVE-2026-22218, allows arbitrary file read access through improper handling of elements, the pieces of content such as files or images that can be attached to messages. Attackers can exploit the flaw via the /project/element endpoint by sending a malicious element update request whose tampered custom element carries a controlled 'path' field, forcing Chainlit to copy the file at that path without validation [2]. This lets attackers read any file accessible to the Chainlit server, including API keys, cloud credentials such as AWS_SECRET_KEY, source code, internal configuration files, SQLite databases, and authentication secrets [1][2].
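The missing control is a classic path-containment check performed before any server-side copy. Below is a minimal sketch of that check, assuming a hypothetical upload root; it illustrates the class of validation whose absence enables CVE-2026-22218, not Chainlit's actual element-handling code:

```python
from pathlib import Path

# Illustrative storage root; a real deployment would use its configured
# element/upload directory here.
ALLOWED_ROOT = Path("/srv/chainlit/uploads").resolve()

def validate_element_path(user_supplied_path: str) -> Path:
    """Reject any element 'path' value that escapes the allowed root.

    resolve() collapses ".." segments and symlinks *before* the check,
    so inputs like "/srv/chainlit/uploads/../../proc/self/environ"
    are caught. Requires Python 3.9+ for Path.is_relative_to().
    """
    candidate = Path(user_supplied_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path escapes upload root: {user_supplied_path}")
    return candidate
```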
Zafran CTO Ben Seri explained that attackers can exfiltrate environment variables by reading /proc/self/environ, which often contains highly sensitive values, including the CHAINLIT_AUTH_SECRET used to sign authentication tokens [1]. When combined with user identifiers leaked from databases or inferred from organization email addresses, this secret lets attackers forge authentication tokens and fully take over users' Chainlit accounts, a clear path to full system compromise [1].
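The value of /proc/self/environ to an attacker is easy to demonstrate locally: on Linux, it mirrors the process environment as NUL-separated KEY=VALUE pairs, so a single arbitrary file read hands over every secret the server process was started with. A small, Linux-only illustration:

```python
# Read this process's own environment the way an attacker would read the
# server's: one file, every variable (API keys, CHAINLIT_AUTH_SECRET, etc.).
raw = open("/proc/self/environ", "rb").read()
for entry in raw.split(b"\x00"):
    if entry:
        key, _, value = entry.partition(b"=")
        # Print only a prefix of each value to avoid dumping real secrets.
        print(key.decode(), "=", value[:6].decode(errors="replace") + "...")
```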
The second vulnerability, CVE-2026-22219, affects Chainlit deployments using the SQLAlchemy data layer and enables server-side request forgery. Attackers exploit it by setting the 'url' field of a custom element, forcing the server to fetch that URL via an outbound GET request and store the response [2]. They can then retrieve the fetched data through the element download endpoints by extracting the element's "chainlit key" property from its metadata, gaining access to internal REST services and probing internal IPs and services [1][2].
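The usual defense against this class of SSRF is egress filtering before any server-side fetch. The sketch below shows one simplified form of it; a production implementation would also pin the resolved address for the actual request to avoid DNS-rebinding races:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def assert_public_target(url: str) -> None:
    """Refuse URLs that resolve to loopback, private, or link-local addresses.

    Link-local filtering also blocks 169.254.169.254, the cloud instance
    metadata endpoint that SSRF attacks commonly target.
    """
    host = urlparse(url).hostname
    if host is None:
        raise ValueError(f"URL has no host: {url}")
    for family, _, _, _, sockaddr in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(sockaddr[0])
        if addr.is_loopback or addr.is_private or addr.is_link_local:
            raise PermissionError(f"{url} resolves to internal address {addr}")
```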
Zafran researchers Gal Zaban and Ido Shani noted that "the two Chainlit vulnerabilities can be combined in multiple ways to leak sensitive data, escalate privileges, and move laterally within the system" [3]. Seri emphasized that the vulnerabilities are "easy to exploit," requiring attackers only to "send a simple command and change one value to point to the file or URL they want to access" [1]. The arbitrary file read also provides crucial reconnaissance that makes the SSRF attack significantly more effective: attackers can first leak environment details and internal addresses, then aim their forged requests at known internal targets [1].
Zafran identified internet-facing applications using Chainlit in the financial services and energy sectors, as well as at universities, though no evidence of in-the-wild exploitation has been detected [1][2]. These sectors handle particularly sensitive information, making the potential for data exfiltration especially concerning.

Zafran disclosed the bugs to the Chainlit maintainers on November 23, 2025, and received an acknowledgment on December 9, 2025 [2]. The vulnerabilities were fixed on December 24, 2025, with the release of Chainlit version 2.9.4, and organizations are strongly advised to upgrade to version 2.9.4 or later (the latest is 2.9.6) as soon as possible [1][2].
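A quick way to confirm whether a given environment is still on a vulnerable build, assuming the package is installed under its PyPI name chainlit and uses plain numeric version strings:

```python
from importlib.metadata import PackageNotFoundError, version

FIXED = (2, 9, 4)  # first release containing the ChainLeak fixes

try:
    # Note: int() would fail on pre-release suffixes such as "2.9.4rc1".
    installed = tuple(int(part) for part in version("chainlit").split(".")[:3])
except PackageNotFoundError:
    installed = None

if installed is None:
    print("chainlit is not installed in this environment")
elif installed < FIXED:
    print("vulnerable:", ".".join(map(str, installed)),
          "- run: pip install --upgrade 'chainlit>=2.9.4'")
else:
    print("patched:", ".".join(map(str, installed)))
```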
The ChainLeak vulnerabilities highlight growing concerns about the security of third-party frameworks as organizations rush to deploy AI systems. Seri acknowledged that companies are "working under very tight timelines to deliver fully functioning AI systems that integrate with highly sensitive data," and while open-source code lets development teams move fast, it also introduces new risks [1].

"The risk is not the use of third-party code by itself, but the combination of rapid integration, limited understanding of the added code, and reliance on external maintainers for security and code quality," Seri explained [1]. Organizations end up deploying backend servers that communicate with clients, cloud resources, and LLMs, creating multiple entry points where vulnerabilities can emerge. Zafran warned that "as organizations rapidly adopt AI frameworks and third-party components, long-standing classes of software vulnerabilities are being embedded directly into AI infrastructure," introducing poorly understood attack surfaces where well-known vulnerability classes can directly compromise AI-powered systems [3].

This incident is a reminder for organizations to maintain vigilant oversight of their AI infrastructure dependencies, run regular security audits of third-party frameworks, and establish rapid patching procedures. As AI systems increasingly handle sensitive enterprise data and integrate deeply with cloud environments, the attack surface expands, making proactive security measures critical for preventing unauthorized access to cloud credentials and for protecting against lateral movement within corporate networks.
Summarized by Navi