Sticky Codes

Limits & costs

sticky is intentionally lightweight: a single language-model call over a small, carefully-chosen slice of a repository. Everything it does is bounded so a runaway generation can't happen.

Snapshot limits

The repo snapshot — the thing the model actually sees — is capped on three axes:

  • 40 files maximum, picked by the scoring heuristic described in How it works.
  • 12 KB per file. Files larger than this are skipped, not truncated. A 200 KB minified bundle wouldn't be useful to the model anyway.
  • 120 KB total across all selected files. Once the budget's gone, the rest of the queue is dropped even if there was still room in the file count.
  • The full file tree is included separately as just a list of paths, capped at 500 entries. The model uses this to talk about layout even when it hasn't read those files.
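The caps above amount to a single selection pass over the ranked files. This is a minimal sketch of that logic, assuming invented names (`FileEntry`, `buildSnapshot`) — it is not sticky's actual implementation:

```typescript
// Illustrative sketch of the snapshot budget; names are assumptions.
interface FileEntry {
  path: string;
  size: number; // bytes
}

const MAX_FILES = 40;
const MAX_FILE_BYTES = 12 * 1024;   // larger files are skipped, not truncated
const MAX_TOTAL_BYTES = 120 * 1024; // shared budget across all selected files

function buildSnapshot(ranked: FileEntry[]): FileEntry[] {
  const selected: FileEntry[] = [];
  let total = 0;
  for (const file of ranked) { // ranked by the scoring heuristic
    if (selected.length >= MAX_FILES) break;
    if (file.size > MAX_FILE_BYTES) continue;      // skip oversized files entirely
    if (total + file.size > MAX_TOTAL_BYTES) break; // budget gone: drop the rest
    selected.push(file);
    total += file.size;
  }
  return selected;
}
```

Note the asymmetry: an oversized file is merely skipped (`continue`), but an exhausted total budget stops the whole pass (`break`), which is why trailing queue entries get dropped even when the file count has room.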

What this means in practice

For small and medium repos — anything up to a few hundred source files — the snapshot covers most of the meaningful surface. The wiki tends to be specific and grounded.

For large repos (think monorepos, frameworks with thousands of files), the snapshot captures the entry points and top-level structure but misses leaves. The wiki for a big repo will read more like an architectural overview than an exhaustive guide. That's a feature: the goal is a readable orientation, not a generated mirror of the codebase.

For very small repos — single-file gists, demo repos — the snapshot is just "all of it", and the wiki ends up being a careful explanation of a short program.

Function and request limits

  • 300 seconds is the maximum execution time for /api/generate (set via maxDuration on the route). Generations comfortably finish in 30–90 seconds; the long ceiling is a safety net.
  • No rate limiting yet in this version. The endpoint is open. Heavy public abuse would show up on the AI bill before it broke anything else; rate limiting is on the roadmap.
  • No authentication. Anyone can submit a public repo. Private repos aren't supported (and would fail at the Octokit fetch step regardless).
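The execution ceiling is a one-line route export in Next.js. This is a minimal sketch assuming an App Router handler; the handler body is a placeholder, not the real generation logic:

```typescript
// Sketch of a Next.js App Router route with the 300-second cap.
export const maxDuration = 300; // seconds; most generations finish in 30–90s

export async function POST(req: Request): Promise<Response> {
  // ...select the snapshot, call the model, stream the wiki back...
  return new Response("not implemented", { status: 501 });
}
```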

Cost shape

Every wiki is one streaming call to Claude Sonnet 4.6 through the Vercel AI Gateway. Input is the snapshot (small — under 120 KB plus the file tree); output is a wiki (usually 4–10 KB of Markdown).

In practice a single generation costs cents, not dollars. The Library is the primary cost-amortization mechanism: a listed wiki for facebook/react is generated once and then served from the database to everyone who pastes that URL afterward.

Unlisted submissions skip the dedupe and each pay for their own generation. That's intentional — unlisted is for repos you don't want indexed, and the price of that privacy is a separate generation.

What the model can and can't see

It can see:

  • The 40 selected file contents.
  • The repo's file tree (up to 500 paths).
  • Repo metadata: description, primary language, topics, default branch name, star count.
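Put together, the model's entire view of a repo fits a small shape like the following; the field names here are hypothetical, not the real payload:

```typescript
// Hypothetical shape of the model's input; field names are illustrative.
interface RepoSnapshot {
  files: Array<{ path: string; content: string }>; // up to 40 files, each ≤ 12 KB
  tree: string[];                                  // up to 500 paths, names only
  meta: {
    description: string | null;
    primaryLanguage: string | null;
    topics: string[];
    defaultBranch: string;
    stars: number;
  };
}
```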

It cannot see:

  • Commit history, PRs, issues, or releases.
  • Files larger than 12 KB.
  • Anything inside dependency or build directories (node_modules, dist, .next, vendor, etc.).
  • Binary files, lock files, or anything with an extension outside the allowlist.
  • External documentation sites, ADRs that live elsewhere, or wikis on GitHub.
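The exclusions above boil down to a per-path eligibility check. Both sets in this sketch are illustrative guesses — the actual allowlist isn't shown in this document:

```typescript
// Sketch of the path filter implied by the exclusions; sets are assumptions.
const IGNORED_DIRS = new Set(["node_modules", "dist", ".next", "vendor"]);
const ALLOWED_EXTS = new Set([".ts", ".tsx", ".js", ".py", ".go", ".md", ".json"]);

function isEligible(path: string): boolean {
  const parts = path.split("/");
  const name = parts[parts.length - 1];
  if (parts.some((part) => IGNORED_DIRS.has(part))) return false; // dependency/build dirs
  if (name.includes("lock")) return false;                        // lock files
  const dot = name.lastIndexOf(".");
  if (dot <= 0) return false;                 // no extension: treat as binary/unknown
  return ALLOWED_EXTS.has(name.slice(dot));   // everything else needs an allowlisted extension
}
```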

The system prompt explicitly tells the model to say "unclear from the provided files" rather than guess. If it guesses anyway, that's a prompt bug — see the FAQ.


Next: Privacy & data — what gets stored and what doesn't.