LangChain & LangGraph CVEs: Path Traversal and SQL Injection Expose Secrets in 84M-Download AI Frameworks
Two high-severity CVEs disclosed March 27, 2026 affect LangChain and LangGraph — the most widely used Python AI orchestration frameworks. CVE-2026-34070 enables arbitrary file reads via the prompt-loading API. CVE-2025-67644 allows SQL injection through the LangGraph SQLite checkpointer. If your AI application uses either library, here is what is exposed and how to remediate it immediately.
Dillip Chowdary
Founder & AI Researcher • March 28, 2026
Vulnerability Summary
- CVE-2026-34070 — LangChain path traversal in langchain_core/prompts/loading.py, CVSS 7.5 (High)
- CVE-2025-67644 — LangGraph SQLite checkpoint SQL injection via metadata filter keys, CVSS 7.3 (High)
- 52M / 23M / 9M weekly downloads for LangChain / LangChain-Core / LangGraph
- Data at risk: env files, Docker configs, .ssh keys, conversation history, databases
- Disclosed: March 27, 2026 — patch immediately
CVE-2026-34070: Path Traversal via Prompt-Loading API
The first vulnerability resides in langchain_core/prompts/loading.py, specifically in the function responsible for loading prompt templates from external paths. When a prompt template path is provided as user input — a common pattern in multi-tenant LLM applications and agent orchestration systems — the loading function performs no sanitization or boundary validation on the supplied path string.
An attacker who can influence the prompt template path parameter can supply a crafted path such as ../../etc/passwd, ../../.env, or ../../.ssh/id_rsa to read arbitrary files from the server's filesystem. Because LangChain applications are typically deployed with Python process permissions that include read access to the application directory and often broader system paths, the scope of accessible data is significant.
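For illustration only, the core mechanics can be reproduced in a few lines. This sketch mirrors the vulnerable pattern described above; it is not LangChain's actual loading code:

```python
import posixpath

# Illustrative sketch of the vulnerable pattern, NOT LangChain's actual code.
def naive_resolve(base_dir: str, user_path: str) -> str:
    # join + normpath performs no boundary check, so "../" sequences
    # in user_path walk right out of base_dir.
    return posixpath.normpath(posixpath.join(base_dir, user_path))

# An attacker-controlled template path escapes the prompt directory:
print(naive_resolve("/srv/app/prompts", "../../../etc/passwd"))  # /etc/passwd
```

Any loader built on this shape will read whatever file the process user can read, which is exactly the exposure described below.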
What Attackers Can Read
- .env files: API keys for OpenAI, Anthropic, AWS, database connection strings, JWT secrets — the typical contents of an AI application's environment configuration
- Docker and Kubernetes configs: Container registry credentials, cluster service account tokens, image pull secrets
- SSH private keys: If the application is deployed on a server with ~/.ssh/ accessible, the attacker gains lateral movement capability
- Application source code: Business logic, proprietary prompt engineering, hardcoded credentials in configuration files
- Database connection configs: ORM settings, connection strings with embedded credentials
The CVSS score of 7.5 reflects that exploitation requires no authentication beyond the ability to supply a prompt template path — which in many LangChain deployments is a user-accessible parameter in chatbot interfaces, document processing pipelines, and agent configuration endpoints.
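Until every deployment is patched, callers can also enforce the boundary themselves before any path reaches a loader. A minimal defensive sketch, using a general containment pattern rather than the upstream fix (requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import Path

def safe_prompt_path(base_dir: str, user_path: str) -> Path:
    """Resolve user_path against base_dir and refuse anything that escapes it.

    A defensive wrapper to apply before handing paths to any loader;
    this is a general containment pattern, not the official patch.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # Path.is_relative_to (Python 3.9+) rejects any resolved path
    # that lands outside the prompt directory.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes prompt directory: {user_path!r}")
    return candidate
```

Resolving before checking matters: a prefix comparison on the raw string would miss both `../` sequences and symlinks.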
CVE-2025-67644: SQL Injection in LangGraph SQLite Checkpointer
The second vulnerability affects LangGraph's SQLite checkpoint implementation — the persistence layer that LangGraph uses to store agent conversation state, memory, and workflow checkpoints between invocations. LangGraph's checkpoint system is a foundational component: without it, stateful agents cannot persist memory across sessions.
The flaw exists in the metadata filter construction logic. When a caller passes metadata filter keys to query stored checkpoints, these keys are interpolated directly into SQL query strings without parameterization. An attacker who controls metadata filter key names can inject arbitrary SQL, enabling them to read, modify, or delete checkpoint data stored in the SQLite database.
The Injection Surface
LangGraph's checkpointer aget_tuple() and alist() methods accept a filter argument. In vulnerable versions, filter key names are concatenated directly into the WHERE clause rather than bound as parameters.
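The difference between the vulnerable shape and a safe one can be sketched in a few lines. This is illustrative code, not LangGraph's source, and the exact query shape is an assumption for the example:

```python
# Illustrative sketch of the flaw class, NOT LangGraph's actual source.
def unsafe_where(metadata_filter: dict) -> str:
    # Vulnerable shape: key names interpolated straight into SQL.
    # A crafted key injects arbitrary predicates into the WHERE clause.
    return " AND ".join(
        f"json_extract(metadata, '$.{k}') = ?" for k in metadata_filter
    )

ALLOWED_KEYS = {"source", "step", "writes"}  # example allowlist

def safe_where(metadata_filter: dict) -> tuple[str, list]:
    # Values can be parameterized, but identifiers and key names cannot,
    # so constrain key names to a strict allowlist instead.
    for k in metadata_filter:
        if k not in ALLOWED_KEYS:
            raise ValueError(f"disallowed filter key: {k!r}")
    clause = " AND ".join(
        f"json_extract(metadata, '$.{k}') = ?" for k in metadata_filter
    )
    return clause, list(metadata_filter.values())
```

The key point: SQL placeholders protect values only. When key names must appear in the query text, the only safe options are an allowlist or a strict identifier pattern check.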
Beyond data destruction, successful exploitation enables reading conversation histories from other users' agent sessions — a significant privacy violation in multi-tenant LangGraph deployments where multiple users share the same SQLite checkpoint store.
Scale of Exposure: Why This Is Systemic
The combined download numbers paint a picture of how deeply embedded these libraries are in the current AI application ecosystem. LangChain receives 52 million weekly downloads, LangChain-Core 23 million, and LangGraph 9 million. Because LangChain and LangGraph are used as core building blocks — not just direct dependencies — the downstream impact extends to every wrapper, integration, and managed platform built on top of them.
This systemic dependency means that many organizations running vulnerable LangChain code are not the teams that originally installed LangChain — they are using a higher-level product (a RAG framework, an agent toolkit, a managed LLM platform) that transitively depends on a vulnerable LangChain version. Supply chain awareness is critical here: assume that any Python AI application deployed in the last 18 months may be running a vulnerable version unless explicitly verified.
Downstream Impact Pattern
Libraries built on LangChain include LlamaIndex integrations, AutoGPT backends, LangServe deployments, and dozens of commercial RAG platforms. If your team did not write a LangChain import statement, you may still be running it. Run pip show langchain langchain-core langgraph in every production environment to audit installed versions immediately.
Affected Versions and Remediation
According to the disclosure, all versions of langchain-core prior to the patched release and all versions of langgraph prior to the patched SQLite implementation are vulnerable. Maintainers released patches the same day as the disclosure (March 27, 2026).
Immediate Remediation Steps
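Upgrade both libraries to the patched releases (pip install -U langchain-core langgraph) in every environment, then verify what each environment is actually running. The disclosure excerpt does not name the exact patched version numbers, so the stdlib audit sketch below takes them as parameters; fill in the fixed releases from the official advisories:

```python
from importlib.metadata import PackageNotFoundError, version

def older(installed: str, patched: str) -> bool:
    # Naive dotted-integer comparison; use the `packaging` library
    # for full PEP 440 handling in production.
    as_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3] if p.isdigit())
    return as_tuple(installed) < as_tuple(patched)

def needs_upgrade(package: str, patched: str) -> bool:
    """True if `package` is installed at a version older than `patched`."""
    try:
        return older(version(package), patched)
    except PackageNotFoundError:
        return False  # not installed in this environment

# "X.Y.Z" placeholders: substitute the patched versions from the advisories.
# for pkg, fixed in [("langchain-core", "X.Y.Z"), ("langgraph", "X.Y.Z")]:
#     if needs_upgrade(pkg, fixed):
#         print(f"UPGRADE {pkg}: pip install -U {pkg}")
```

Run this (or pip-audit) inside each deployed environment rather than on a developer machine, since container images and virtualenvs routinely pin different versions.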
Architectural Lessons for AI Application Security
These CVEs expose two anti-patterns that are endemic to the rapid-prototyping culture that dominates AI application development. The first is trusting user-supplied paths for resource loading. In web development, path traversal has been a well-understood class of vulnerability with standard mitigations for well over a decade. The fact that it appears in a library with 52 million weekly downloads in 2026 reflects how much AI application development is being done by teams without classical web security backgrounds.
The second anti-pattern is dynamic SQL construction without parameterization. SQL injection has been on the OWASP Top 10 since 2003. Its appearance in a persistence layer used by millions of AI agent deployments is a stark reminder that security fundamentals do not automatically transfer to new application paradigms. LangGraph's checkpointer is architecturally analogous to a session store — a component that any web developer would treat as a high-security boundary. It was not treated that way here.
For teams building on LangChain and LangGraph, the remediations above address the immediate vulnerabilities. The broader lesson is to apply the same threat modeling to AI application components that you would apply to any backend service: untrusted input should never touch filesystem paths or SQL strings without sanitization, regardless of how quickly the rest of the stack was assembled.
What to Check in Your Deployments
- Audit all load_prompt() call sites — flag any where the path argument derives from request parameters, environment configs, or database values
- Audit LangGraph filter key sources — ensure filter keys passed to aget_tuple(), alist(), and aput() are static strings, not user-controlled values
- Check transitive dependencies — use pip-audit or safety check to identify vulnerable versions in the full dependency tree
- Review deployed container images — images built before March 27, 2026 contain vulnerable versions; rebuild and redeploy all affected containers
- Rotate secrets in exposed environments — if any production deployment ran a vulnerable version with user-accessible prompt loading, treat all secrets in that environment as compromised