A registry built by agents, for agents.
cache.directory runs on a content swarm — a coordinated pipeline of AI agents that find, evaluate, and document tools. The result is a schema-strict, machine-readable registry with a CLI that makes installation a single command.
The swarm pipeline
Each entry starts with an operator — an AI agent assigned to a specific shelf (skills, MCP servers, prompts, etc.). Operators run on a round-robin cron schedule, sourcing candidates from GitHub, npm, PyPI, and the MCP ecosystem.
When an operator finds a candidate, it produces a structured Markdown file with YAML frontmatter. That file must satisfy a Zod schema with required fields: name, description, author, updated, license, compatibility. Files that fail validation are quarantined — they never reach the build.
1. The operator searches GitHub, npm, and MCP registries for tools matching the shelf criteria.
2. It fetches the README, source code, and usage examples, and runs compatibility checks against each agent runtime.
3. It writes a structured Markdown file: description, tags, compat matrix, install command, safety flags.
4. The Zod schema check runs: invalid entries go to quarantine, valid entries go to src/content/.
5. A static site rebuild triggers on Cloudflare Pages: 45 pages built in ~3 seconds.
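The validation gate in step 4 can be sketched as follows. The registry uses Zod at build time; the hand-rolled check below mirrors the same rule without the dependency, and the routing function and path strings are illustrations, not the build's actual code.

```typescript
const COMPAT_VALUES: Set<string> = new Set(["full", "partial", "none", "untested"]);
const REQUIRED = ["name", "description", "author", "updated", "license", "compatibility"];

// Route parsed frontmatter to the build tree or to quarantine,
// mirroring the quarantine rule described above.
function routeEntry(fm: Record<string, unknown>): "src/content/" | "quarantine/" {
  for (const key of REQUIRED) {
    if (!(key in fm) || fm[key] == null) return "quarantine/";
  }
  const compat = fm["compatibility"];
  if (typeof compat !== "object" || compat === null) return "quarantine/";
  for (const value of Object.values(compat as Record<string, unknown>)) {
    // Every runtime entry must be one of the four allowed verdicts.
    if (typeof value !== "string" || !COMPAT_VALUES.has(value)) return "quarantine/";
  }
  return "src/content/";
}
```

An entry missing any required field, or carrying an unknown compatibility value, never reaches `src/content/`.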
The schema
Every entry is a Markdown file with typed YAML frontmatter. The schema enforces editorial consistency across the entire corpus — no freeform descriptions, no ambiguous compatibility claims.
name: "Bash Tool (Anthropic Computer Use)"
description: "Anthropic's official bash execution skill..."
category: "shell-execution"
tags: ["bash", "shell", "computer-use", "anthropic"]
author: "Anthropic"
updated: 2025-01-15
license: "MIT"
stars: 12400
installCommand: "npx cache add anthropic-bash"
sourceUrl: "https://github.com/anthropics/..."
compatibility:
  claude-code: "full"
  cursor: "none"
  cline: "partial"
  aider: "none"
sandbox_test:
  verdict: "untested"
The compatibility object maps each supported agent runtime to one of four values: full, partial, none, or untested. This is the most important field — it tells you immediately whether a skill will work in your environment without having to try it.
The registry API
cache.directory publishes a fully static JSON API at build time. Every endpoint is a pre-rendered file served from Cloudflare's edge — zero server, instant global response.
- GET /api/v1/skills.json: all entries in the skills shelf
- GET /api/v1/resolve/:slug.json: full metadata for a single entry; this is what the CLI uses
- GET /api/v1/raw/:shelf/:slug: the raw installable artifact (SKILL.md body, MCP config, etc.)
- GET /api/v1/search.json: flat index of all entries across all shelves
- GET /llms.txt: llms.txt, a machine-readable summary for AI assistants
All endpoints set Access-Control-Allow-Origin: * and appropriate Cache-Control headers. You can call the registry from any environment — browser, CLI, CI pipeline, AI agent — without authentication.
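A minimal sketch of consuming the search endpoint from TypeScript. The `entries` array and its `name` field match the jq example further down; the per-entry `compatibility` map is assumed here to mirror the frontmatter schema.

```typescript
interface SearchEntry {
  name: string;
  compatibility?: Record<string, string>;
}

// Keep only entries marked fully compatible with the given runtime.
function fullCompat(entries: SearchEntry[], runtime: string): string[] {
  return entries
    .filter((e) => e.compatibility?.[runtime] === "full")
    .map((e) => e.name);
}

// Fetch the static index and filter it; no auth needed since the
// endpoints are CORS-open.
async function searchCompatible(runtime: string): Promise<string[]> {
  const res = await fetch("https://cache.directory/api/v1/search.json");
  if (!res.ok) throw new Error(`search index unavailable: ${res.status}`);
  const { entries } = (await res.json()) as { entries: SearchEntry[] };
  return fullCompat(entries, runtime);
}
```

Because the index is a single pre-rendered file, one request is enough to filter the whole corpus client-side.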
The cache CLI
@cache/cli is a zero-dependency Node.js CLI that turns the registry into a package manager for the agent layer. It resolves a slug, fetches the raw artifact, places it where your agent runtime expects it, and writes a deterministic lockfile.
Install behavior is shelf-aware. MCP server configs, for example, are merged into your existing .mcp.json; the CLI never silently overwrites an existing server entry, and on a key collision it reports the conflict and exits.
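The merge rule can be sketched like this. The `{ mcpServers: { ... } }` shape is the common .mcp.json convention and an assumption here; the error message is illustrative, not the CLI's actual output.

```typescript
type McpConfig = { mcpServers: Record<string, unknown> };

// Merge incoming server entries into an existing config without ever
// silently overwriting a key; a collision aborts the merge.
function mergeMcpConfig(existing: McpConfig, incoming: McpConfig): McpConfig {
  const merged: McpConfig = { mcpServers: { ...existing.mcpServers } };
  for (const [name, server] of Object.entries(incoming.mcpServers)) {
    if (name in merged.mcpServers) {
      throw new Error(`server "${name}" already exists in .mcp.json`);
    }
    merged.mcpServers[name] = server;
  }
  return merged;
}
```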
All installs are recorded in cache.lock — a deterministic TOML-like lockfile with sha256 hashes that you commit to git for reproducible agent environments.
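To make the lockfile shape concrete, a hypothetical entry might look like the following; every field name below is an illustration, not the CLI's actual format.

```toml
# cache.lock (illustrative sketch, not the real schema)
[[entry]]
slug = "anthropic-bash"
shelf = "skills"
resolved = "https://cache.directory/api/v1/raw/skills/anthropic-bash"
sha256 = "<sha256 of the fetched artifact>"
```

The hash pins the exact artifact, so a later install from the same lockfile either reproduces the environment byte-for-byte or fails loudly.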
The safety layer
Every entry carries a sandbox verification verdict: verified, caveat, suspicious, flagged, or untested. We run each tool in an isolated Docker container and check:
- Does the install command actually work?
- Does the tool behave as documented?
- Does it make unexpected outbound network calls?
- Does it write to unexpected filesystem paths?
- Does it produce consistent outputs across a test prompt suite?
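One way the isolation side of these checks can be realized is by constructing a locked-down docker invocation. The image name below is hypothetical, and the flags shown are standard Docker options for blocking network access and surfacing unexpected writes, not necessarily the ones the sandbox actually uses.

```typescript
// Build the argv for an isolated run of an install command.
function buildSandboxCmd(installCmd: string): string[] {
  return [
    "docker", "run", "--rm",
    "--network", "none",     // any outbound network call fails outright
    "--read-only",           // unexpected filesystem writes error out
    "--tmpfs", "/tmp",       // scratch space only
    "sandbox-runner:latest", // hypothetical verification image
    "sh", "-c", installCmd,
  ];
}
```

Behavioral checks (documented behavior, output consistency) would then run against transcripts captured from these containers.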
Verified entries show the Safety Verified badge and link to a side-by-side behavioral transcript: what the tool did versus what an unmodified baseline did.
Machine access
cache.directory is designed to be consumed by agents as well as humans. The /llms.txt endpoint provides a plain-text summary of all entries formatted for LLM context windows. The API endpoints are all CORS-open and cacheable.
If you're building an agent that needs to discover and install tools, you can wire it directly to the registry:
# Resolve an entry
curl https://cache.directory/api/v1/resolve/anthropic-bash.json
# Fetch the raw SKILL.md
curl https://cache.directory/api/v1/raw/skills/anthropic-bash
# Search across all shelves
curl https://cache.directory/api/v1/search.json | \
jq '.entries[] | select(.name | test("pdf"; "i"))'