LangChain on the AI Trust Index
We scored every LangChain service across 6 trust dimensions and 23 sub-metrics. Here's what we found — and why the most important framework in the AI agent ecosystem is getting most things right.
If you're building AI agents in 2026, there's a high chance LangChain is somewhere in your stack. The framework and its ecosystem — LangGraph, LangSmith, langchain-core, the MCP adapters — have become foundational infrastructure for how production AI systems get built, tested, and deployed.
That's exactly why we chose LangChain as our first Featured Publisher on the Fabric Layer AI Trust Index.
Not because they're perfect. Because they're important. When a framework reaches the scale where hundreds of millions of installs depend on it, the question stops being "is it useful?" and becomes "is it safe to build on?" We wanted to answer that question with data.
Harrison Chase has made the point himself: agents are non-deterministic by nature, and you can't know in advance which tools they'll reach for. He's right. And that's precisely why the tools your agent depends on need to be scored before they get there.
What we scored
The AI Trust Index evaluates every service across 6 trust dimensions: Security Posture, Operational Health, Maintenance Activity, Licensing & Compliance, Community Trust, and Identity & Provenance. Each dimension contains multiple sub-metrics, totalling 23 discrete signals. The result is a score out of 5.00 that maps to one of three decisions: allow, review, or block.
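The score-to-decision mapping can be sketched in a few lines. The thresholds below are illustrative assumptions for the sake of the example, not Fabric Layer's published cut-offs:

```python
def decision_for(score: float) -> str:
    """Map a 0.00-5.00 trust score to an allow / review / block decision.

    Thresholds are assumed for illustration, not the Trust Index's own.
    """
    if not 0.0 <= score <= 5.0:
        raise ValueError("score must be between 0.00 and 5.00")
    if score >= 4.0:   # assumed floor for "allow"
        return "allow"
    if score >= 2.5:   # assumed floor for "review"
        return "review"
    return "block"

print(decision_for(4.6))  # → allow
```

The point of the three-way mapping is that a score is only useful if it drives an action: a dependency either enters the tree, gets a human look, or stays out.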
For LangChain, we scored every service published under the LangChain organisation, from the core framework packages to their ecosystem of integrations, adapters, and platform tools.
Every single one of these is now searchable, scoreable, and linkable on the Trust Index. If you're evaluating LangChain as a dependency, you can see every score at trust.fabriclayer.ai/?publisher=LangChain.
The headline: LangChain scores well
We'll say it plainly: LangChain's ecosystem performs above average across our Trust Index. The majority of their services land in the allow range. For a framework with this surface area, that's notable.
LangChain's core packages — langchain, langchain-core, langgraph, and langsmith — show strong maintenance velocity, active security response, and transparent governance. This is exactly what you want to see from infrastructure you're building production systems on.
But aggregate scores tell one story. The details tell a more interesting one.
Where LangChain excels
Maintenance velocity
LangChain ships. The release cadence across their core packages is among the highest we've tracked in the AI tooling ecosystem. langchain-core alone has pushed dozens of releases in the last quarter, with langsmith and langgraph on similarly aggressive cycles. When we measure time between commits, release frequency, and contributor activity, LangChain consistently outpaces comparable frameworks.
This matters because maintenance velocity is one of the strongest predictors of security posture. When a critical vulnerability drops, you want to know the team behind the package can respond in hours, not weeks.
Security response: the LangGrinch test
In December 2025, security researcher Yarden Porat disclosed CVE-2025-68664 — codenamed LangGrinch — a critical serialisation injection vulnerability in langchain-core itself. CVSS score: 9.3 out of 10. This wasn't a community plugin edge case. It was in the core serialisation path, affecting the dumps() and dumpd() functions that millions of production deployments touch every day.
LangChain's response was decisive. Patches landed in versions 1.2.5 and 0.3.81. The fixes went beyond the immediate vulnerability — they changed default settings to be restrictive by design, added allowlist parameters for deserialisation, blocked Jinja2 templates by default, and flipped the secrets_from_env parameter from True to False. The project awarded Porat their maximum-ever bounty of $4,000.
A critical CVE in core infrastructure is not automatically a trust failure. What matters is response time, remediation depth, and whether the fix strengthens the system beyond the original vulnerability. LangChain passed this test.
A parallel vulnerability in LangChain.js (CVE-2025-68665, CVSS 8.6) was patched simultaneously, demonstrating cross-ecosystem coordination — another trust signal we track.
Community trust & adoption
LangChain's adoption signals are hard to argue with. The framework is used in production by LinkedIn, Uber, Klarna, GitLab, Replit, Cloudflare, Rippling, and Workday, among others. LangSmith holds SOC 2 Type 2, HIPAA, and GDPR compliance. The GitHub ecosystem spans tens of thousands of stars across repositories, and the community maintains one of the most active developer forums in the AI space.
On our Community Trust dimension, this translates into strong scores. Adoption breadth, enterprise references, and community engagement all factor positively.
Where it gets more nuanced
The surface area problem
LangChain's ecosystem is massive. And that's both a strength and a risk surface.
The core packages — langchain, langchain-core, langgraph, langsmith — are well-maintained, actively patched, and organisationally governed. But as you move further out into the integration layer — community packages, provider-specific adapters, checkpoint implementations — the maintenance picture becomes less uniform.
This is not unique to LangChain. It's a pattern we see across every large open-source ecosystem. But it matters here because LangChain's design philosophy encourages composition: you typically don't just install langchain. You install langchain + langchain-openai + langgraph + langchain-community + whatever vector store adapter you need. Each additional dependency is an additional trust decision.
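One way to make "each additional dependency is an additional trust decision" concrete: evaluate the stack as the minimum of its members' scores. The package scores and thresholds below are invented placeholders, not real Trust Index data:

```python
# Hypothetical scores for a typical composed LangChain stack.
AGENT_STACK = {
    "langchain": 4.7,
    "langchain-openai": 4.5,
    "langgraph": 4.6,
    "langchain-community": 3.4,
    "example-vectorstore-adapter": 2.1,  # hypothetical community adapter
}

def decision(score: float) -> str:
    # Assumed thresholds for illustration only.
    if score >= 4.0:
        return "allow"
    if score >= 2.5:
        return "review"
    return "block"

# Each package is its own trust decision...
per_package = {pkg: decision(s) for pkg, s in AGENT_STACK.items()}

# ...but the stack as a whole inherits its weakest member.
stack_floor = min(AGENT_STACK.values())
print(per_package)
print("stack decision:", decision(stack_floor))
```

The well-governed core packages pass individually, but a single low-scoring adapter drags the composed stack down to its level. That is the surface-area problem in miniature.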
The MCP frontier
LangChain's MCP adapters deserve special mention. As MCP adoption accelerates — and with it, the number of unvetted tool servers that agents can connect to — LangChain is increasingly the bridge between agent logic and external tool execution.
The langchain-mcp-adapters package (Python and JS) converts MCP tools into LangChain-compatible tools, enabling agents to dynamically load and execute tools from any MCP server. This is powerful. It's also a trust amplifier: the security posture of your LangChain agent now depends on the security posture of every MCP server it connects to.
This is exactly the kind of chain-of-trust problem the Fabric Layer Trust Index is designed to make visible. When you score a LangChain service, you're scoring one node in a much larger dependency graph. The next question is always: what else is this connected to?
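One practical mitigation is to gate dynamically loaded tools behind a trust check before the agent can call them. This is a minimal sketch: `ToolStub` and the server score table are stand-ins, and a real integration would wrap tools loaded via langchain-mcp-adapters and query an actual score source rather than a hard-coded dict:

```python
from dataclasses import dataclass

@dataclass
class ToolStub:
    """Stand-in for a tool loaded from an MCP server."""
    name: str
    server: str

# Placeholder trust scores for MCP servers the agent might connect to.
SERVER_SCORES = {"vetted-search": 4.4, "random-forum-server": 1.8}

def gate_tools(tools, min_score=4.0):
    """Keep only tools whose originating server meets the trust threshold.

    Servers with no score at all default to 0.0 and are excluded,
    so unknown servers fail closed rather than open.
    """
    allowed = []
    for tool in tools:
        score = SERVER_SCORES.get(tool.server, 0.0)
        if score >= min_score:
            allowed.append(tool)
    return allowed

tools = [
    ToolStub("web_search", "vetted-search"),
    ToolStub("run_shell", "random-forum-server"),
]
print([t.name for t in gate_tools(tools)])  # → ['web_search']
```

The design choice worth noting is the fail-closed default: a tool from an unscored server never reaches the agent's tool list, which keeps the trust decision ahead of execution rather than after it.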
The broader context: why this matters now
The 2026 OSSRA report from Black Duck, published last week, found that open-source vulnerabilities per codebase have surged 107% — to an average of 581 per application. 93% of codebases are running components with zero development activity in two or more years. Only 7% of components are on the latest version.
AI agents are building on top of this ecosystem. LangChain itself sits at the intersection — a framework that connects LLMs to the broader open-source world. Every tool call, every MCP server, every community integration is a path into that dependency graph.
The fact that LangChain's core infrastructure scores well on our Trust Index is a meaningful signal. It means the foundation is solid. But the edges — where your agent meets the open-source ecosystem — need the same level of scrutiny.
What this means for builders
If you're building on LangChain — and statistically, you probably are — here's what our analysis suggests:
Trust the core. Verify the edges. langchain-core, langgraph, and langsmith are well-maintained and actively governed. But every additional package you pull in — integrations, community modules, MCP servers — deserves its own trust evaluation. Use the Trust Index to check each one before it enters your dependency tree.
This is not about fear. It's about visibility. Harrison Chase has been saying it himself: agents are non-deterministic by nature. You can't predict step 14 from step 1. What you can do is ensure every tool your agent might reach for has been independently evaluated before it gets the chance.
About this report
This is the first in a series of Featured Publisher reports from Fabric Layer. We select publishers whose services are widely depended on across the AI ecosystem, and provide an independent trust analysis designed to be useful — not punitive.
We believe that trust scoring should make good tools more visible, not less. If you build a good tool, you get a good score. The data is free, the methodology is transparent, and the goal is simple: give every team building with AI agents the visibility they need to make informed trust decisions.
LangChain is doing the work. The scores reflect that. We'll continue monitoring and updating as their ecosystem evolves.