The model
Every entry in cache.directory is researched and written by a content swarm — a coordinated pipeline of AI agents that find, evaluate, and document tools against a strict Zod schema. A human curator reviews the schema and the editorial stance; the swarm handles the volume.
This is not a user submission directory. There are no sponsored listings, no paid placements, no "featured" slots for sale. Entries are ordered by signal quality: GitHub stars, install counts, and recency — all sourced from public data.
When the swarm finds something, it produces a structured Markdown file. If the file doesn't validate against the schema, it's moved to a quarantine directory and the build never sees it. This is the filter.
Editorial stance
- Real data only. Stars and install counts come from public APIs — GitHub, npm, PyPI. We don't invent numbers.
- No affiliate links. Every install command is a real command. No referral codes, no tracking URLs.
- Honest compatibility. If a tool only works with Claude Code, the compatibility matrix says so. We don't mark "full" where "partial" is accurate.
- Stale is labeled stale. Entries older than 12 months get a visible stale badge. We'd rather you know than trust outdated docs.
Verification
The sandbox verification layer runs MCP servers and agent skills in isolated Docker containers. Verification checks:
- Does the install command work?
- Does the tool do what it claims?
- Does it make unexpected network calls or file system writes?
- Does it produce safe outputs across a suite of test prompts?
Entries that pass get the Safety Verified badge and a link to the behavioral transcript. Entries that haven't been tested yet are marked untested — we don't hide the gap.
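The isolation half of these checks can be sketched as a builder for the `docker run` invocation. This is a sketch under assumptions: the function name, image, and flag set are illustrative, not the real harness configuration, which presumably allows network access during install and locks it down for behavioral tests.

```typescript
// Hypothetical: builds docker CLI arguments for one sandboxed check.
// Network is denied and the root FS is read-only, so any unexpected
// network call or file system write fails loudly instead of silently.
function buildSandboxArgs(image: string, command: string): string[] {
  return [
    "run",
    "--rm",             // discard the container after the check
    "--network=none",   // surface unexpected network calls
    "--read-only",      // surface unexpected file system writes
    "--tmpfs", "/tmp",  // scratch space only
    image,
    "sh", "-c", command,
  ];
}

// Usage sketch (image name is an assumption):
// spawn("docker", buildSandboxArgs("node:20-slim", "npx some-tool --help"));
```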
Shelves
cache.directory is organized into six shelves, each targeting a distinct part of the AI builder toolkit:
- agent skills — Anthropic-format skill files (SKILL.md) for Claude and compatible agents
- mcp servers — Model Context Protocol servers — give your agent real tools
- ai starters — production-ready boilerplate repos for AI-first apps
- system prompts — production prompts worth reading, stealing, and remixing
- cursor / claude rules — project-level AI configuration for editors
- local llm tools — Ollama, LM Studio, llama.cpp — run models on your machine
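In a TypeScript pipeline, a taxonomy like the one above is naturally modeled as a literal union so entries can't claim a shelf that doesn't exist. A sketch (the constant and type names are assumptions):

```typescript
// The six Wave 1 shelves, mirroring the list above.
const SHELVES = [
  "agent skills",
  "mcp servers",
  "ai starters",
  "system prompts",
  "cursor / claude rules",
  "local llm tools",
] as const;

// "agent skills" | "mcp servers" | ... — usable in a Zod enum
// or a frontmatter type for compile-time shelf checking.
type Shelf = (typeof SHELVES)[number];
```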
Coverage and gaps
The directory currently has 11 entries across 6 shelves. That number grows as the swarm runs: Wave 1 targets ~100 entries per shelf, expanding to 4,000+ across 13 shelves in subsequent waves.
If something's missing, it's either queued or not yet on the swarm's radar. There's no way to submit entries in v1 — the editorial model is swarm-first to maintain consistency. A community contribution path is planned for v2.
Technical stack
cache.directory is a static site built with Astro and deployed to Cloudflare Pages. Content collections are Markdown files with Zod-validated frontmatter. No database. No server-side rendering. No JavaScript frameworks in the critical path.
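A collection of this shape might be declared with Astro's content collections API roughly as follows. The field names are assumptions for illustration; the real schema is stricter.

```typescript
// src/content/config.ts — sketch of a Zod-validated collection.
import { defineCollection, z } from "astro:content";

const entries = defineCollection({
  type: "content",
  schema: z.object({
    name: z.string(),
    shelf: z.string(),
    installCommand: z.string(),
    stars: z.number().int().nonnegative().optional(),
    updatedAt: z.coerce.date(), // would drive the 12-month stale badge
  }),
});

export const collections = { entries };
```

With this in place, `astro build` fails on any Markdown file whose frontmatter doesn't match the schema, which is the same guarantee the quarantine step enforces upstream.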
The swarm pipeline is a TypeScript daemon running on-premises in Stockholm, Sweden. The daemon manages operator scheduling, schema validation, and Cloudflare deployment triggers.
Source: github.com/cache-directory
Contact
cache.directory is an independent project. For corrections, factual disputes, or partnership inquiries: [email protected]
For legal requests (DMCA, takedown): [email protected]