06 / about

Built because I needed it.

I'm Matthew Diakonov. I ship a dozen small AI products (claude-meter, skillhu.bz, prompt-db, fazm, mediar) and live in Claude Code for 8 to 10 hours a day. By late April 2026 I had hit weekly limits three Wednesdays in a row, my CLAUDE.md was 7,400 tokens, and I had no way to tell which lines were useful and which were just a tax on every prompt.

I built ccmd over the May 19 to 22 weekend so I could audit my own file. The first version surfaced eleven findings, including three duplicates I had not noticed and a cache-busting timestamp at the top that was silently re-rendering my entire context every session. Cutting them dropped the file to 4,100 tokens and reclaimed enough weekly quota to ship the rest of the sprint.

ccmd is the analyzer-plus-recommender wedge missing from the Claude Code tooling space: generators tell you what to put in your file, cost CLIs tell you what you spent, marketplaces tell you which skills exist. None of them tell you what is wrong with your file right now. That is the gap.

It is polyglot on purpose, because AGENTS.md (Codex), .cursorrules (Cursor), and .grokrules (xAI Grok Build) all converged on effectively the same shape between January and May 2026. Same rubric, same failure modes, same skill recommendations. Building four tools would have been silly.

Free forever for the in-browser analyzer. The paid tier exists because monitoring a moving file on a webhook actually costs me money. On the free tier, your CLAUDE.md never leaves your browser. The paid tier sends only parsed structure and session metadata, never your code or your prompts.