March 2026 · 6 min read

Introducing the Fabric Trust Index, 23 signals scoring 5,800+ AI services

The Fabric Trust Index is now in public beta — 23 sub-signals across 6 dimensions scoring 5,800+ AI services, SDKs, agent tools, and MCP skills. Full transparency, no defaults, actively evolving. No account required.

Why we built this

The AI ecosystem is scaling faster than anyone can audit. New tools launch every week. Anyone can build one. Most are never reviewed for safety, privacy, or reliability. A slick landing page doesn't mean a tool is safe, and a tool that was safe last month can push a bad update, change its privacy policy, or get acquired by someone with different intentions — and nobody tells you.

Right now, the answer to "is this tool safe?" is mostly vibes. Download counts. GitHub stars. A README that looks professional. None of these are trust signals — they're popularity signals, and they're trivially gamed.

We think the ecosystem deserves better. So we built it.

23 sub-signals. Six dimensions. One score.

Every service in the index receives a trust score from 0.00 to 5.00, computed from 23 independently scored sub-signals grouped into six weighted dimensions. The dimensions are weighted by risk — security carries more weight than popularity — and the granularity makes gaming the system extremely difficult. An attacker would need to simultaneously spoof all 23 sub-signals to pass.

Each sub-signal is scored independently, and when a sub-signal lacks data, its weight is redistributed proportionally to sub-signals that do have data. No defaults, no faking coverage — every score reflects exactly what we can verify.

Vulnerability & Safety (weight 25%, sample score 5.0)
CVE severity, patch availability, dependency tree depth, malware signatures, and typosquatting indicators.
- Known CVEs (40%): CVE count and severity scoring
- Dependency Health (30%): total dependency count evaluation
- Supply Chain (30%): supply chain presence and transitive CVE penalties

Operational Health (weight 15%, sample score 4.4)
Uptime, latency, error rates, and incident history.
- Uptime (35%): 30-day rolling uptime percentage
- Response Latency (25%): p99 response time in milliseconds
- Error Rate (20%): percentage of non-2xx status codes
- Incident History (20%): incidents in the last 90 days

Maintenance Activity (weight 15%, sample score 4.2)
Commit recency, release cadence, and issue responsiveness.
- Commit Recency (30%): days since last commit
- Release Cadence (25%): days since last release
- Issue Response (20%): median time to close issues
- CI/CD Presence (25%): GitHub Actions workflows present

Adoption (weight 15%, sample score 3.0)
Download volume, GitHub stars, and growth trends.
- Download Volume (30%): npm + PyPI weekly downloads
- GitHub Stars (25%): stargazer count from GitHub
- Dependents (30%): downstream dependent packages
- Growth Trend (15%): week-over-week download velocity

Transparency (weight 15%, sample score 3.8)
Open source status, documentation, security policy, and changelog.
- Open Source (30%): public repo and license type
- Documentation (25%): README quality and docs presence
- Security Policy (20%): SECURITY.md file presence
- Changelog (25%): CHANGELOG.md and release history

Publisher Trust (weight 15%, sample score 4.3)
Track record, org maturity, and cross-platform identity.
- Track Record (30%): sibling service scores and GitHub credibility
- Org Maturity (30%): GitHub account age and organization type
- Community Standing (20%): public repository count
- Cross-Platform (20%): presence on GitHub, npm, and PyPI
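As a rough sketch of how the dimensions combine into one score, using the published dimension weights and the sample dimension scores shown above (the field names and structure are illustrative, not Fabric's actual schema):

```python
# Illustrative only: combine six dimension scores into one 0.00-5.00 trust
# score using the published dimension weights. Names are assumptions.
DIMENSION_WEIGHTS = {
    "vulnerability_safety": 0.25,
    "operational_health": 0.15,
    "maintenance_activity": 0.15,
    "adoption": 0.15,
    "transparency": 0.15,
    "publisher_trust": 0.15,
}

sample_scores = {
    "vulnerability_safety": 5.0,
    "operational_health": 4.4,
    "maintenance_activity": 4.2,
    "adoption": 3.0,
    "transparency": 3.8,
    "publisher_trust": 4.3,
}

trust_score = sum(w * sample_scores[d] for d, w in DIMENSION_WEIGHTS.items())
print(f"{trust_score:.2f}")
```

With these sample values the weighted score is about 4.2, which lands in the Trusted tier.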

Services score into three tiers: Trusted (3.00–5.00), Caution (1.00–2.99), and Blocked (0.00–0.99). For AI agents, every service includes a single decision field — allow, review, or block — so an agent can route without parsing scores.
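For an agent, consuming that decision field can be a one-liner. A hypothetical sketch; only the allow/review/block values come from the post, while the record shape and the fail-closed default are assumptions:

```python
# Hypothetical: route an agent's tool call on the Trust Index decision field.
# Failing closed on a missing or unknown value is a design choice here,
# not documented Fabric behaviour.
def route(service):
    decision = service.get("decision")
    if decision == "allow":
        return "proceed"
    if decision == "review":
        return "escalate_to_human"
    return "refuse"  # "block", missing, or unrecognised: fail closed

print(route({"name": "example-tool", "decision": "review"}))  # escalate_to_human
```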

Why sub-signals matter

The original scoring engine used six flat signals. It worked, but it was a black box — you could see that "Maintenance Activity" scored 3.2 but not why. The sub-signal architecture breaks each dimension into independently verifiable components. On any service page, you can expand each signal to see exactly which sub-signals contributed to the score, which had data, and which didn't.

When a sub-signal lacks data (e.g., no PyPI downloads for an npm-only package), its weight doesn't default to zero or some placeholder — it redistributes proportionally to the sub-signals that do have data. This means a service scored with 3 of 4 sub-signals isn't penalised for missing data. It's scored accurately on what we can verify.
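A minimal sketch of that redistribution, using the Adoption sub-signal weights from the table above (the scores and the missing-data scenario are made up for illustration):

```python
# Sketch: weights of sub-signals without data are redistributed
# proportionally across the sub-signals that do have data.
def dimension_score(sub_signals):
    """sub_signals: {name: (weight, score_or_None)} -> weighted score."""
    available = {n: ws for n, ws in sub_signals.items() if ws[1] is not None}
    total = sum(w for w, _ in available.values())
    if total == 0:
        return 0.0  # nothing verifiable
    return sum((w / total) * s for w, s in available.values())

# Suppose Download Volume has no data; its 30% weight is shared pro rata
# among the three sub-signals that were verified.
adoption = {
    "download_volume": (0.30, None),  # no data: weight redistributed
    "github_stars":    (0.25, 3.5),
    "dependents":      (0.30, 2.8),
    "growth_trend":    (0.15, 3.0),
}
print(round(dimension_score(adoption), 2))
```

Note that the result is a weighted average over the verified sub-signals only; the missing one neither drags the score toward zero nor pads it with a default.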

No defaults. No faking coverage.

Every sub-signal score reflects real, verified data. If we can't verify it, we redistribute the weight — we don't guess.

Supply chain protection

The most dangerous attack vector in AI isn't a suspicious new tool — it's a trusted one that changes hands. A repo transfer is the most common vector for supply chain attacks. The attacker transfers a popular repo to their account, publishes a new version with malicious code, and the package manager trusts it because the repo looks the same.

Fabric detects this. Our override system evaluates hard safety rules after every scoring cycle, including three designed specifically for supply chain integrity. These overrides are hard caps: no combination of high signal scores can bypass them.

CVE detection in under five minutes

Fabric runs a two-tier CVE scanning system: a full scan every hour with immediate rescore on significant drops, and a fast-path scanner every five minutes checking OSV.dev delta feeds. Worst case from CVE publication to score update: five minutes. CVEs are classified by severity and patch status — a critical unpatched CVE is penalised 20× more than a low patched one — and the collector traces the full dependency tree, not just direct dependencies.
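The severity-by-patch-status scaling can be pictured with hypothetical penalty factors. The exact multipliers Fabric uses are not published; the values below are assumptions chosen only to reproduce the 20x spread mentioned above:

```python
# Hypothetical penalty factors: severity weight multiplied by patch status.
# These numbers are illustrative assumptions, not Fabric's actual constants.
SEVERITY_FACTOR = {"low": 1, "medium": 2, "high": 5, "critical": 10}
PATCH_FACTOR = {"patched": 1, "unpatched": 2}

def cve_penalty(severity, patch_status):
    return SEVERITY_FACTOR[severity] * PATCH_FACTOR[patch_status]

# A critical unpatched CVE weighs 20x a low patched one: 10 * 2 vs 1 * 1.
assert cve_penalty("critical", "unpatched") == 20 * cve_penalty("low", "patched")
```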

Eight discovery sources

The index grows daily through eight automated discovery sources. Registry crawlers search npm and PyPI by AI-relevant keywords. A news scanner checks GitHub Trending, Product Hunt, YC Launches, and Show HN for day-one catches. ClawHub skills and MCP servers from Smithery.ai and mcp.so are crawled separately. A curated watchlist catches platform-only products like Cursor, Perplexity, and Bolt.new that have no package to find in a registry.

A new AI tool launched on Product Hunt at 9am can be discovered, enriched with metadata, scored across all 23 sub-signals, and assessed — all by the next morning. No human intervention at any step.

- 5,800+ services indexed
- 23 sub-signals scored
- 10 data sources
- <5m CVE detection

Fully automated. No manual reviews.

Every score is computed from real-time data pulled from public APIs. No human reviewers, no pay-to-play, no editorial bias. A provider doesn't need to opt in — or even know they're being scored — for Fabric to evaluate them.

The only way to improve a score is to improve the underlying signals.

Fix vulnerabilities. Maintain the project. Improve uptime. Verify your identity. Publish documentation. There is no other path.

Data flows in from 10 sources, including OSV.dev for vulnerabilities, the GitHub API for maintenance and transparency signals, npm and PyPI for adoption, VirusTotal for malware scanning, ClawHub for AI skill analysis, HTTP pings for operational health, and Claude for AI-generated trust assessments that explain scores in plain English.

This is a beta

The scoring engine is in active beta. Signals, sub-signal weights, and thresholds are being calibrated in real time as coverage expands across the index. Scores may shift between versions as new data sources come online and scoring logic is refined.

What's live now (Phase 1): the core sub-signal architecture — 23 sub-signals across 6 dimensions with full weight redistribution for missing data. What's next (Phase 2): permission scope analysis, expanded collection sources, publisher verification, and a public API for programmatic access.

The full scoring methodology, data sources, and technical architecture are documented at fabriclayer.ai/docs. Everything is transparent because trust infrastructure should be.

Work with us

If you have feedback, want to integrate, or want to discuss the scoring methodology — reach out at hello@fabriclayer.ai, on GitHub, or on X.

Is this AI tool safe to use?

Search 5,800+ AI services. 23 sub-signals. Free. No account required.

Browse the Trust Index →