
Design Rationale

Date: 2026-02-14
Status: Draft
Authors: Allen R.

The Claude Code plugin ecosystem works, but it has pain points that compound as adoption grows. This design document captures the reasoning behind ccpkg — a package format and toolchain designed to address three specific problems.

Plugins today are installed from GitHub repositories. This means every installation is a git clone that pulls whatever happens to be on the default branch. There is no version pinning, no integrity verification, and no dependency vendoring. Plugins break silently when authors push changes. Installations fail when transitive dependencies shift. Lifecycle management (install, update, remove) is fragile because there is no formal contract between the plugin and the host.

Every session start fetches plugin state from GitHub repos. For users with multiple plugins, this adds noticeable latency — sometimes minutes — before Claude Code is ready to use. Nothing in the current model supports deferred loading; everything is eager.

There is no quality signal for plugins. Finding good extensions means word-of-mouth or browsing fragmented GitHub repos. There is no curation, no verification, and no structured metadata to filter by. Users cannot distinguish maintained, tested plugins from abandoned experiments.

MCP server configurations (.mcp.json) end up buried inside plugin cache directories. Configuring auth tokens, environment variables, and secrets requires users to locate and edit files in opaque paths. There is no declarative config model — every plugin reinvents configuration.


Before designing ccpkg, we studied six existing systems that each solve a subset of these problems. The goal was not to copy any one of them but to understand what patterns are proven and which tradeoffs matter.

The MCP Bundle format packages MCP servers as ZIP archives containing a manifest.json and vendored dependencies. The key insight is that self-contained archives eliminate dependency hell. One file, one install, zero post-install steps. The .mcpb format proves that one-click install for MCP servers is achievable and that users strongly prefer it over multi-step setup.

What it does not solve: .mcpb is scoped to MCP servers only. It has no concept of skills, hooks, commands, or the broader plugin surface. It also has no discovery mechanism or version management.

The .vsix format is a ZIP archive containing package.json, bundled node_modules, and extension code. VS Code supports both marketplace install and manual .vsix sideloading. Extension packs allow bundling multiple extensions.

Key takeaways: the marketplace + manual install duality works well. Authors publish to the marketplace for discoverability; power users sideload for development or private extensions. The package.json manifest carrying both metadata and activation events is a clean pattern. Extension packs demonstrate that bundles-of-bundles have real demand.

What we learned to avoid: .vsix build tooling (vsce) is heavyweight. The activation event system is complex. We want something simpler.

Homebrew uses Git repositories as distribution channels. A “tap” is just a repo containing “formulae” (Ruby DSL files that describe how to fetch, build, and install packages). Zero infrastructure required for authors — you push a repo, users brew tap it.

Key takeaway: decentralized distribution via Git repos is powerful. No central authority needed. Anyone can host a tap. The formula-as-manifest pattern is elegant, but a Ruby DSL is too opinionated for our use case. JSON manifests are more portable.

lazy.nvim is the Neovim plugin manager that changed how the ecosystem thinks about startup performance. Two ideas stand out:

  1. Lazy loading: Plugins are registered at startup but their content is not loaded until triggered (by command, filetype, event, or key mapping). This transforms O(n) startup into O(1) + on-demand.
  2. Lockfile: lazy-lock.json pins every plugin to an exact commit hash. git pull + lockfile = reproducible environment. The lockfile is committable, so teams share identical plugin state.
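For reference, lazy-lock.json is a flat map from plugin name to its pinned revision; an entry looks roughly like this (the name and values here are placeholders, not taken from a real lockfile):

```json
{
  "telescope.nvim": { "branch": "master", "commit": "<40-char commit sha>" }
}
```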

Also notable: lazy.nvim supports local development paths (dir = "~/projects/my-plugin") alongside remote sources. This is essential for plugin authors.

Terraform’s registry protocol is an HTTP API that returns JSON responses containing version lists, download URLs, platform-specific checksums, and documentation links. Private registries use the same protocol.

Key takeaway: the registry is just a JSON API. No magic, no special infrastructure. A static JSON file on GitHub Pages can implement the protocol. This proves that decentralized, low-infrastructure registries work at scale. The checksum-per-platform pattern is relevant for archives that might differ across OS/arch.

Agent Skills is the open specification for SKILL.md files. Although the format was originally perceived as Claude-specific, research revealed that SKILL.md with YAML frontmatter is broadly adopted across mainstream AI coding assistants — Gemini CLI, Codex, GitHub Copilot, OpenCode, Cursor, and twenty-plus others. The format uses progressive disclosure: YAML frontmatter for machine-readable metadata, a markdown body for the full skill definition.

This finding significantly expanded our portability ambitions. Skills are not a Claude-specific extension point — they are a cross-tool standard.


We evaluated three approaches, each building on the prior art differently.

Approach A: a direct port of the .mcpb archive pattern to cover all plugin types (skills, hooks, commands, agents). Simple: ZIP + manifest + vendored content.

Pros: Minimal design surface. Proven pattern. Fast to implement.

Cons: Does not address startup performance (still eager loading). No discovery or trust mechanism. No version management beyond “replace the archive.”

Approach B: bundles (as in Approach A) plus a manifest index file and a lockfile for version pinning. A local registry file lists known packages with versions and checksums.

Pros: Adds reproducibility via lockfile. Enables basic discovery through the index.

Cons: More complex than A without fully solving the problems. The registry is local-only, so discovery is still limited. No deferred component loading.

Approach C: archive bundles + lockfile + optional decentralized registries + checksum verification.

Pros: Directly targets all three pain points. Builds on the proven mcpb archive pattern. Borrows lazy.nvim’s best ideas (deferred loading + lockfiles). Keeps registries optional and decentralized (Homebrew tap model meets Terraform registry protocol). Checksum verification adds the trust layer.

Cons: Largest design surface. More to implement. But the complexity maps 1:1 to real problems — there is no accidental complexity here.

Selected Approach C. The additional complexity over A and B is justified because each added component (lockfile, registry protocol, checksum verification) directly eliminates a specific user pain point.


The package format is a ZIP archive with the .ccpkg extension.

Why ZIP, not tarball? ZIP has universal tooling support across every platform and language. Both .mcpb and .vsix use ZIP. Users can inspect packages with any ZIP tool. Tarballs require a separate decompression step (gzip/bzip2/xz) and are less friendly on Windows.

Why self-contained? The archive contains all dependencies vendored inside. There is no post-install npm install, no runtime network fetches, no build steps. This is the core lesson from .mcpb: if the package is not self-contained, it is not reliable.

Internal structure: The archive mirrors the .claude/ directory structure so extraction maps naturally to the plugin layout. manifest.json sits at the archive root (not hidden inside a subdirectory). This makes inspection trivial — unzip -l package.ccpkg shows the manifest immediately.
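An illustrative layout for a hypothetical package that ships one skill, one command, and an MCP server template (file names are examples, not normative):

```
my-plugin.ccpkg
├── manifest.json        (metadata, config slots, component list)
├── skills/
│   └── review/SKILL.md
├── commands/
│   └── review.md
└── .mcp.json            (template with ${config.*} references)
```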

2. The plugin IS a plugin (four-type hybrid architecture)


ccpkg itself is a Claude Code plugin. It uses all four extension types available in the Claude Code plugin system:

  • Skills for interactive, agentic operations (init wizard, search, describe). These are conversational — Claude adapts its responses based on context.
  • Commands for deterministic operations (/ccpkg:install, /ccpkg:pack, /ccpkg:verify). Same input, same behavior, every time.
  • Hooks for enforcement (integrity verification on install). These run without LLM interpretation.
  • Scripts (Node.js) for heavy lifting (ZIP handling, SHA256 checksums, lockfile resolution, cache management). Hooks and commands delegate to scripts for anything computationally intensive.

Why this split? Each extension type has a natural role. Using the wrong type creates friction: a skill that should be deterministic frustrates users with inconsistent behavior. A command that should be conversational cannot adapt to context. A hook that invokes the LLM adds latency to every session start. The four-type split matches each operation to its natural execution model.

Why is ccpkg itself a plugin? Self-referential design means the packaging tool validates its own format. If ccpkg cannot package itself, something is wrong with the format. It also means users install the package manager the same way they install any package — no special bootstrap.

Packages install to one of two locations:

  • User scope: ~/.ccpkg/plugins/ — available in all projects for this user.
  • Project scope: {project-root}/.ccpkg/plugins/ — available only in this project, committable to version control.

Resolution order: Explicit flag (--user or --project) wins. If no flag, the manifest’s scope hint is used. If no hint, default is user scope.

Why user-wins? The user is the one who has to live with where the package lands. Author hints are suggestions, not mandates. A team-shared linting plugin might suggest project scope, but an individual user might prefer it globally.
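The resolution order above is small enough to state directly. A minimal sketch, assuming the flag value and manifest hint arrive as plain strings (names are illustrative):

```javascript
// Sketch of install-scope resolution: explicit flag wins, then the
// manifest's scope hint, then the user-scope default.
function resolveScope(flag, manifestHint) {
  if (flag) return flag;                 // --user or --project wins outright
  if (manifestHint) return manifestHint; // author hint is only a suggestion
  return "user";                         // default scope
}
```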

Per-scope lockfiles: Each scope has its own lockfile. The project lockfile ({project-root}/.ccpkg/ccpkg-lock.json) is committable and shareable — team members get identical package versions. The user lockfile (~/.ccpkg/ccpkg-lock.json) is personal.
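A hypothetical project lockfile under this design might look like the following (field names are illustrative, not the normative schema):

```json
{
  "lockfileVersion": 1,
  "packages": {
    "code-tools": {
      "version": "1.2.3",
      "resolved": "https://example.com/registry/code-tools-1.2.3.ccpkg",
      "checksum": "sha256-<hex digest>"
    }
  }
}
```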

Plugins frequently need configuration: API keys, file paths, feature flags, server URLs. Today this is ad-hoc. ccpkg formalizes it.

Typed config slots: The manifest declares configuration with typed slots:

{
  "config": {
    "API_KEY": { "type": "secret", "required": true, "description": "Service API key" },
    "MAX_RESULTS": { "type": "number", "default": 10 },
    "OUTPUT_FORMAT": { "type": "enum", "values": ["json", "text"], "default": "json" }
  }
}

Supported types: secret, string, number, boolean, enum, path.
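Validation against these typed slots is straightforward. A sketch, assuming slot objects shaped like the manifest example above (not normative):

```javascript
// Sketch of validating a user-supplied value against a typed config slot.
function validateConfigValue(slot, value) {
  switch (slot.type) {
    case "number":
      return typeof value === "number" && Number.isFinite(value);
    case "boolean":
      return typeof value === "boolean";
    case "enum":
      // enum slots carry an explicit list of allowed values
      return Array.isArray(slot.values) && slot.values.includes(value);
    case "secret":
    case "string":
    case "path":
      return typeof value === "string";
    default:
      return false; // unknown slot type: reject
  }
}
```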

Install-time prompting: When a package is installed, the user is prompted for required config values that lack defaults. This happens once, at install time, not at every session start.

Separation of storage: Config values are stored in settings.json under a packages.{name} namespace — not in the package cache directory. This means uninstalling a package does not destroy configuration, and users can find all their settings in one predictable location.

Template substitution: .mcp.json and .lsp.json files inside the archive are templates. ${config.VARIABLE_NAME} references are resolved against stored config at load time. This completely solves the “buried .mcp.json” problem — users configure at install time, the template handles the wiring.
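The substitution itself can be sketched in a few lines. This assumes config values were collected and stored at install time; the function name is illustrative:

```javascript
// Sketch of load-time ${config.*} substitution for .mcp.json templates.
// Throws rather than silently leaving a placeholder unresolved.
function resolveTemplate(template, config) {
  return template.replace(/\$\{config\.([A-Za-z0-9_]+)\}/g, (_, name) => {
    if (!(name in config)) throw new Error(`Missing config value: ${name}`);
    return config[name];
  });
}
```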

MCP servers are a critical part of the plugin ecosystem. ccpkg supports three modes of including them:

  1. Traditional: command + args + env. The standard way MCP servers are configured — point to a binary or npm package with environment variables.
  2. Embedded mcpb: A .mcpb bundle ships inside the .ccpkg archive. The installer extracts and registers it. Self-contained within self-contained.
  3. Referenced mcpb: A URL + SHA256 checksum pointing to an external .mcpb file. Downloaded and verified at install time.

All three modes use the same ${config.*} variable substitution for environment variables, secrets, and paths.

Why three modes? Different MCP servers have different distribution needs. A small, purpose-built server fits neatly inside the archive (embedded). A large, independently-versioned server is better referenced externally. A server distributed as an npm package or binary uses the traditional model. Supporting all three means package authors pick what fits, not what the format forces.
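The three modes might be declared in a manifest roughly like this (shapes, server names, and URLs are illustrative, not normative):

```json
{
  "mcpServers": {
    "search": {
      "command": "npx",
      "args": ["-y", "example-search-server"],
      "env": { "API_KEY": "${config.API_KEY}" }
    },
    "notes": { "bundle": "servers/notes.mcpb" },
    "bigserver": {
      "url": "https://example.com/bigserver-2.1.0.mcpb",
      "sha256": "<hex digest>"
    }
  }
}
```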

The ccpkg is the envelope, not the server: The .ccpkg file is the delivery mechanism and configuration layer. It does not replace or wrap the MCP server — it packages and configures it.

Early in the design, we assumed skills and some other components would be Claude-specific. Research corrected this assumption.

Key finding: SKILL.md is broadly adopted across mainstream AI coding assistants. Claude Code, Gemini CLI, Codex, GitHub Copilot, OpenCode, Cursor, and twenty-plus others all support the format. This is not a Claude-specific extension point — it is a cross-tool standard.

Universal core (works across tools without modification):

  • ZIP archive + manifest.json (container format)
  • Config model (typed slots, env vars, secrets)
  • Skills (SKILL.md with YAML frontmatter)
  • MCP servers (open standard)
  • LSP servers (industry standard)

Near-universal (same concept, different filenames):

  • Instruction files: CLAUDE.md, AGENTS.md, copilot-instructions.md, GEMINI.md
  • Slash commands (most tools have some form)

Tool-specific (thin adapter needed):

  • Hook event names (PreToolUse, PostToolUse are Claude-specific)
  • Agent invocation mechanics (subagent spawning varies by tool)

Design decision: Universal core from day one, with a thin adapter layer only where tools truly diverge. An instructions/mappings.json file maps the canonical instruction file to each tool’s expected filename. This maximizes adoption potential — a ccpkg package is not a “Claude Code package,” it is a coding assistant package that works best with Claude Code.
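The adapter layer might take a shape like this in instructions/mappings.json (a sketch; only the filenames are from the text above, the keys are invented for illustration):

```json
{
  "instructions": {
    "claude-code": "CLAUDE.md",
    "gemini-cli": "GEMINI.md",
    "github-copilot": "copilot-instructions.md",
    "default": "AGENTS.md"
  }
}
```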

This directly addresses the “minutes to start a session” problem. The design borrows from lazy.nvim’s proven approach.

At session start:

  • Read the lockfile (fast, local file)
  • Load ONLY manifest metadata: name, description, component list
  • Register hooks (but do not execute them)
  • Register MCP servers (but do not start them)
  • Register skills and commands (names and descriptions only)

On demand:

  • Full skill/agent content: loaded when the skill is invoked
  • MCP servers: started on first tool invocation
  • Hook scripts: executed when their trigger event fires
  • Heavy scripts: run only when their command is called

What this means in practice: A user with twenty installed packages sees the same startup time as a user with zero. The lockfile read is O(n) in package count but each package contributes only a few hundred bytes of metadata. Full content loading is amortized across the session — you only pay for what you use.
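The eager-register / lazy-load split can be sketched as a small registry. Names and shapes here are illustrative, not the implementation:

```javascript
// Sketch of deferred component loading: only name and description are
// held at startup; full content is loaded on first invocation.
class LazyRegistry {
  constructor() {
    this.entries = new Map();
    this.fullLoads = 0; // counts expensive content loads
  }
  register(name, description, loader) {
    this.entries.set(name, { description, loader, content: null });
  }
  invoke(name) {
    const entry = this.entries.get(name);
    if (!entry) throw new Error(`Unknown component: ${name}`);
    if (entry.content === null) {
      entry.content = entry.loader(); // pay the cost only when used
      this.fullLoads++;
    }
    return entry.content;
  }
}
```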

8. Registry protocol (optional, decentralized)


Registries solve discovery. But a mandatory central registry creates a single point of failure and a governance problem. ccpkg takes the decentralized approach.

A registry is a JSON index file. It can be hosted on GitHub Pages, S3, a personal web server, or any static file host. The format is defined in the spec, but hosting is up to the author.

No central authority required. Anyone can host a registry. Users configure which registries to query. A community-maintained “default” registry can emerge organically without being mandated by the format.

Trust signals in the index: Each registry entry can include:

  • SHA256 checksums for integrity verification
  • Author information and verification status
  • Download counts
  • Compatibility tags (Claude Code version, OS, etc.)
  • Last-updated timestamps

Discovery via search: /ccpkg:search queries configured registries, merges results, and presents them with trust signals. This is the experience gap between “browse GitHub repos” and “find the right package.”
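A registry index carrying the trust signals above might look like this (illustrative only; the normative schema lives in the spec):

```json
{
  "registryVersion": 1,
  "packages": [
    {
      "name": "code-tools",
      "latest": "1.2.3",
      "url": "https://example.com/packages/code-tools-1.2.3.ccpkg",
      "sha256": "<hex digest>",
      "author": { "name": "Jane Doe", "verified": true },
      "downloads": 4200,
      "compatibility": { "claude-code": ">=2.0" },
      "updatedAt": "2026-02-01T00:00:00Z"
    }
  ]
}
```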

9. MCP server deduplication at install time


When multiple packages bundle the same MCP server, the installer deduplicates at install time rather than requiring packages to declare shared dependencies or a shared MCP directory.

Why install-time dedup? Packages stay self-contained. Authors do not need to change anything. The dedup is transparent — the installer is smarter about what it writes to the host config. This preserves Principle #1 (self-contained) and Principle #8 (no inter-package deps).

Why not shared directory with refcounting? Reference counting introduces a new complexity vector. Crashes mid-uninstall corrupt counts. It breaks the “each plugin is self-contained in its directory” model hosts expect.

Why not a server_id manifest field? Requires schema change and author adoption. Existing packages would not benefit. The key_name + origin tuple provides sufficient identity without opt-in.

Identity model: (key_name, origin) tuple. Origin is derived from server mode: command string for Mode 1, bundle path for Mode 2, source URL for Mode 3. Version resolution: highest wins. User override: per-server or global.
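The identity model and highest-wins resolution can be sketched as follows. The object shapes and the naive semver comparison are illustrative, not the implementation:

```javascript
// Sketch of install-time MCP server deduplication keyed on the
// (key_name, origin) tuple; when identities collide, highest version wins.
function compareVersions(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] || 0) !== (pb[i] || 0)) return (pa[i] || 0) - (pb[i] || 0);
  }
  return 0;
}

function dedupeServers(servers) {
  const byIdentity = new Map();
  for (const s of servers) {
    const key = `${s.keyName}::${s.origin}`; // the identity tuple
    const existing = byIdentity.get(key);
    if (!existing || compareVersions(s.version, existing.version) > 0) {
      byIdentity.set(key, s);
    }
  }
  return [...byIdentity.values()];
}
```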


ccpkg does not replace existing standards. It composes them.

  • MCP Specification: ccpkg packages MAY contain MCP server configurations. The .mcp.json template format is compatible with the standard MCP server configuration format, extended only with ${config.*} variable substitution.
  • Agent Skills Specification: Skills within ccpkg MUST conform to the Agent Skills specification (SKILL.md format, YAML frontmatter schema, progressive disclosure). ccpkg adds no extensions to the skill format itself.
  • mcpb Format: ccpkg supports embedding or referencing .mcpb bundles. The .mcpb format is used as-is — ccpkg is the outer envelope.

The ccpkg manager is itself a Claude Code plugin. This is not a gimmick — it validates the format by using it. The implementation uses:

  • Commands: /ccpkg:init, /ccpkg:pack, /ccpkg:install, /ccpkg:verify, /ccpkg:list, /ccpkg:update, /ccpkg:config, /ccpkg:search
  • Skills: Interactive init wizard (guides authors through manifest creation), package discovery and search
  • Hooks: integrity verification on install
  • Scripts: Node.js/TypeScript for ZIP handling, SHA256 checksum computation, lockfile resolution, cache management, registry queries

Node.js/TypeScript for scripts. Claude Code already runs on Node, so there is no additional runtime dependency. ZIP handling via established libraries, with the deflate compression layer available from Node's built-in zlib. SHA256 via Node's crypto module.


These were open questions during the design phase. All have been resolved.

1. Version ranges vs pinned versions in lockfiles


Decision: Manifest declares semver ranges, lockfile pins exact versions.

The npm model: manifest.json uses ranges like ^1.2.0 to express compatibility intent. ccpkg-lock.json pins to exact resolved versions like 1.2.3. This gives authors flexibility to express compatibility while users get deterministic, reproducible installs. The lockfile is the source of truth for what’s actually installed.
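The range-to-pin step can be sketched for the caret form used above. This handles only `^major.minor.patch` (caret semantics for `0.x` versions differ and are ignored here); function names are illustrative:

```javascript
// Sketch of resolving a caret range from a manifest against the versions
// a registry reports, pinning the highest match into the lockfile.
function parse(v) {
  return v.split(".").map(Number);
}

function satisfiesCaret(range, version) {
  const [maj, min, pat] = parse(range.slice(1)); // strip the "^"
  const [vMaj, vMin, vPat] = parse(version);
  if (vMaj !== maj) return false;      // caret: same major only
  if (vMin !== min) return vMin > min; // otherwise must meet the lower bound
  return vPat >= pat;
}

function resolvePin(range, available) {
  const matches = available.filter((v) => satisfiesCaret(range, v));
  matches.sort((a, b) => {
    const pa = parse(a), pb = parse(b);
    for (let i = 0; i < 3; i++) if (pa[i] !== pb[i]) return pa[i] - pb[i];
    return 0;
  });
  return matches.pop() ?? null; // highest matching version, or null
}
```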

2. Update strategy

Decision: Manual only, with an optional outdated check.

/ccpkg:update is explicit and user-initiated. A separate /ccpkg:outdated command checks configured registries and reports available updates without applying them. No automatic updates, no startup checks. The user is always in control. This avoids the startup latency problem that motivated ccpkg in the first place.

3. Local development workflow

Decision: Yes, symlink dev mode via /ccpkg:link.

/ccpkg:link ./path/to/my-plugin creates a symlink from the packages directory to a local directory. Changes to the source reflect immediately without re-packing. Modeled after npm link and lazy.nvim’s dir option. Essential for package authors iterating on skills, hooks, and commands. The lockfile records linked packages with a "source": "link" field so they are distinguishable from installed archives.

4. Naming conflicts between packages

Decision: Namespace everything by package name.

All components are automatically prefixed by the package name. A skill review in package code-tools becomes /code-tools:review. A hook in linter-pack is registered under the linter-pack namespace. Conflicts are impossible by design. This matches how Claude Code plugins already namespace commands today.
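The prefixing rule is mechanical; a one-line sketch (function name is illustrative):

```javascript
// Sketch of automatic namespacing: every component name is prefixed by
// its package name, so collisions are impossible by construction.
function namespaceCommand(packageName, componentName) {
  return `/${packageName}:${componentName}`;
}
```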

5. Inter-package dependencies

Decision: Explicitly out of scope for v1.

Dependency resolution between packages adds significant complexity (version solving, ordering, conflict resolution) for limited benefit at this stage. The self-contained principle already requires that all components needed by a package live inside the archive. If a skill needs an MCP server, both ship in the same .ccpkg. This may be revisited in a future spec version if the ecosystem grows to the point where shared components across packages become a common need.

6. Package signing and integrity

Decision: Consistent with mcpb — checksums in v1, signing deferred.

The mcpb format has no native signing or checksums, relying on source reputation and manual inspection. ccpkg already improves on this by including a checksum field (SHA-256) in the manifest for integrity verification. Formal cryptographic signing (GPG, sigstore, minisign) is deferred to a future spec version. The checksum field and /ccpkg:verify command provide baseline integrity verification that mcpb lacks, while keeping v1 simple and shippable.


  1. Finalize specification document — the formal spec (see Specification) defines the normative requirements using RFC 2119 language
  2. Implement ccpkg CLI prototype — Node.js/TypeScript, covering pack, install, verify, list
  3. Build the ccpkg plugin — skills + commands + hooks + scripts, self-referentially packaged
  4. Create example packages — at least three: a skills-only package, an MCP server package, and a full hybrid package
  5. Publish spec to GitHub Pages — the spec and this design document, rendered for web consumption
  6. Community feedback — propose the format to the Claude Code and broader AI assistant community