Introducing the Fabric Trust Index: 23 sub-signals scoring 5,800+ AI services
The Fabric Trust Index is now in public beta — 23 sub-signals across 6 dimensions scoring 5,800+ AI services, SDKs, agent tools, and MCP skills. Full transparency, no defaults, actively evolving. No account required.
Why we built this
The AI ecosystem is scaling faster than anyone can audit. New tools launch every week. Anyone can build one. Most are never reviewed for safety, privacy, or reliability. A slick landing page doesn't mean a tool is safe, and a tool that was safe last month can push a bad update, change its privacy policy, or get acquired by someone with different intentions — and nobody tells you.
Right now, the answer to "is this tool safe?" is mostly vibes. Download counts. GitHub stars. A README that looks professional. None of these are trust signals — they're popularity signals, and they're trivially gamed.
We think the ecosystem deserves better. So we built it.
23 sub-signals. Six dimensions. One score.
Every service in the index receives a trust score from 0.00 to 5.00, computed from 23 independently scored sub-signals grouped into six weighted dimensions. The dimensions are weighted by risk — security carries more weight than popularity — and the granularity makes gaming the system extremely difficult. An attacker would need to simultaneously spoof all 23 sub-signals to pass.
Each sub-signal is scored independently, and when a sub-signal lacks data, its weight is redistributed proportionally to sub-signals that do have data. No defaults, no faking coverage — every score reflects exactly what we can verify.
Services score into three tiers: Trusted (3.00–5.00), Caution (1.00–2.99), and Blocked (0.00–0.99). For AI agents, every service includes a single decision field — allow, review, or block — so an agent can route without parsing scores.
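The tier thresholds and decision field above can be sketched as a single mapping. This is an illustrative sketch, not Fabric's actual API; the function name and return shape are assumptions.

```python
def tier_and_decision(score: float) -> tuple[str, str]:
    """Map a 0.00-5.00 trust score to its tier and agent decision.

    Thresholds mirror the published tiers: Trusted (3.00-5.00),
    Caution (1.00-2.99), Blocked (0.00-0.99). Illustrative only.
    """
    if score >= 3.00:
        return ("Trusted", "allow")
    if score >= 1.00:
        return ("Caution", "review")
    return ("Blocked", "block")
```

An agent consuming the index would branch on the second element alone, never parsing the numeric score.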
Why sub-signals matter
The original scoring engine used six flat signals. It worked, but it was a black box — you could see that "Maintenance Activity" scored 3.2 but not why. The sub-signal architecture breaks each dimension into independently verifiable components. On any service page, you can expand each signal to see exactly which sub-signals contributed to the score, which had data, and which didn't.
When a sub-signal lacks data (e.g., no PyPI downloads for an npm-only package), its weight doesn't default to zero or some placeholder — it redistributes proportionally to the sub-signals that do have data. This means a service scored with 3 of 4 sub-signals isn't penalised for missing data. It's scored accurately on what we can verify.
Every sub-signal score reflects real, verified data. If we can't verify it, we redistribute the weight — we don't guess.
Supply chain protection
The most dangerous attack vector in AI isn't a suspicious new tool — it's a trusted one that changes hands. Repository transfers are a common supply chain attack: the attacker transfers a popular repo to their own account, publishes a new version with malicious code, and the package manager trusts it because the repo looks the same.
Fabric detects this. Our override system evaluates hard safety rules after every scoring cycle, including three designed for supply chain integrity:
- Archived repositories — if a GitHub repo is archived, the service is blocked at 0.99 regardless of other signals.
- Deprecated packages — npm packages with the deprecated flag are blocked. The maintainer is telling you to stop using it.
- Ownership transfers — Fabric distinguishes between same-owner renames (no penalty) and different-owner transfers (score frozen, –1.0 penalty, flagged for review). Without this, the engine would see better metrics on the new owner's repo and increase the score — the opposite of correct.
These overrides are hard caps. No combination of high signal scores can bypass them.
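The hard-cap behaviour of the three overrides can be sketched as a post-scoring pass. The flag names below are hypothetical, and the real transfer rule also freezes the score and flags the service for review; this sketch shows only the cap and penalty arithmetic.

```python
def apply_overrides(score: float, flags: dict[str, bool]) -> float:
    """Apply hard safety caps after a scoring cycle (illustrative).

    No combination of high signal scores can lift a capped service:
    the caps are applied last and always win.
    """
    if flags.get("repo_archived") or flags.get("npm_deprecated"):
        return min(score, 0.99)          # forced into the Blocked tier
    if flags.get("owner_transferred"):
        return max(0.0, score - 1.0)     # different-owner transfer penalty
    return score
```

Applying the caps after aggregation, rather than as another weighted signal, is what makes them unbypassable: a 5.00 aggregate still lands at 0.99 if the repo is archived.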
CVE detection in under five minutes
Fabric runs a two-tier CVE scanning system: a full scan every hour with immediate rescore on significant drops, and a fast-path scanner every five minutes checking OSV.dev delta feeds. Worst case from CVE publication to score update: five minutes. CVEs are classified by severity and patch status — a critical unpatched CVE is penalised 20× more than a low patched one — and the collector traces the full dependency tree, not just direct dependencies.
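The severity-by-patch-status weighting can be sketched with multiplier tables. The specific multipliers below are assumptions chosen only so that a critical unpatched CVE weighs exactly 20× a low patched one, as described above; Fabric's real coefficients are not published here.

```python
# Hypothetical multipliers; only the 20x critical-unpatched vs
# low-patched ratio is taken from the published description.
SEVERITY = {"critical": 10.0, "high": 5.0, "medium": 2.0, "low": 1.0}
PATCH = {"unpatched": 2.0, "patched": 1.0}

def cve_penalty(cves: list[tuple[str, str]], unit: float = 0.05) -> float:
    """Sum penalty units over (severity, patch_status) pairs."""
    return sum(SEVERITY[sev] * PATCH[patch] * unit for sev, patch in cves)
```

Factoring the penalty into independent severity and patch-status multipliers keeps the table small while still spanning a wide range between the worst and mildest cases.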
Eight discovery sources
The index grows daily through eight automated discovery sources. Registry crawlers search npm and PyPI by AI-relevant keywords. A news scanner checks GitHub Trending, Product Hunt, YC Launches, and Show HN for day-one catches. ClawHub skills and MCP servers from Smithery.ai and mcp.so are crawled separately. A curated watchlist catches platform-only products like Cursor, Perplexity, and Bolt.new that have no package to find in a registry.
A new AI tool launched on Product Hunt at 9am can be discovered, enriched with metadata, scored across all 23 sub-signals, and assessed — all by the next morning. No human intervention at any step.
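The discover-enrich-score-assess flow can be sketched as a four-stage pipeline. Every function and field below is a stub standing in for a real collector; nothing here reflects Fabric's internal code.

```python
# Illustrative end-to-end pipeline; all names are hypothetical stubs.
def discover(name: str) -> dict:
    return {"name": name, "source": "product_hunt"}

def enrich(svc: dict) -> dict:
    return {**svc, "repo": f"github.com/example/{svc['name']}"}

def score(svc: dict) -> dict:
    return {**svc, "score": 3.40}  # stand-in for the 23-sub-signal engine

def assess(svc: dict) -> dict:
    return {**svc, "decision": "allow" if svc["score"] >= 3.00 else "review"}

def pipeline(name: str) -> dict:
    svc = discover(name)
    for stage in (enrich, score, assess):
        svc = stage(svc)
    return svc
```

Because each stage takes and returns the same record shape, new collectors can be appended to the stage list without touching the others.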
Fully automated. No manual reviews.
Every score is computed from real-time data pulled from public APIs. No human reviewers, no pay-to-play, no editorial bias. A provider doesn't need to opt in — or even know they're being scored — for Fabric to evaluate them.
The only way to raise a score is to improve the service: fix vulnerabilities, maintain the project, improve uptime, verify your identity, publish documentation. There is no other path.
Data flows in from 10 sources, including OSV.dev for vulnerabilities, the GitHub API for maintenance and transparency, npm and PyPI for adoption, VirusTotal for malware scanning, ClawHub for AI skill analysis, HTTP pings for operational health, and Claude for AI-generated trust assessments that explain scores in plain English.
This is a beta
The scoring engine is in active beta. Signals, sub-signal weights, and thresholds are being calibrated in real time as coverage expands across the index. Scores may shift between versions as new data sources come online and scoring logic is refined.
What's live now (Phase 1): the core sub-signal architecture — 23 sub-signals across 6 dimensions with full weight redistribution for missing data. What's next (Phase 2): permission scope analysis, expanded collection sources, publisher verification, and a public API for programmatic access.
The full scoring methodology, data sources, and technical architecture are documented at fabriclayer.ai/docs. Everything is transparent because trust infrastructure should be.
Work with us
If you have feedback, want to integrate, or want to discuss the scoring methodology — reach out at hello@fabriclayer.ai, on GitHub, or on X.
Is this AI tool safe to use?
Search 5,800+ AI services. 23 sub-signals. Free. No account required.
Browse the Trust Index →