Understanding Patterns

The playbook is kli's long-term memory — a collection of patterns learned from previous work. Patterns capture what worked, what didn't, and how to approach specific types of problems. Claude consults the playbook automatically when starting tasks and updates it after reflecting on completed work.

What Patterns Look Like

A pattern is a short, actionable piece of guidance tagged with a domain and scored by how often it has been helpful or harmful. For example:

[lisp-000042] helpful=5 harmful=0 ::
When editing defstruct forms, always reload dependents —
SBCL doesn't propagate slot changes to compiled callers.

Patterns are prescriptive ("do X when Y") rather than descriptive ("X exists"). They capture the kind of knowledge that saves time on the second encounter.
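
Internally, a pattern amounts to a small scored record. Here is a minimal sketch in Python with a hypothetical schema (kli's actual storage format is not shown here):

    from dataclasses import dataclass

    @dataclass
    class Pattern:
        id: str           # e.g. "lisp-000042"
        domain: str       # e.g. "lisp"
        guidance: str     # the prescriptive "do X when Y" text
        helpful: int = 0  # votes from tasks where it helped
        harmful: int = 0  # votes from tasks where it misled

        def score(self) -> int:
            # Net evidence: high scorers surface more readily.
            return self.helpful - self.harmful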

How Patterns Emerge

  1. During work, Claude records observations in the task event log — things it discovers, constraints it hits, approaches that succeed or fail

  2. During reflection (/kli:reflect), Claude reviews those observations and promotes the transferable ones to patterns. Not every observation qualifies — only insights that would help in future, unrelated tasks

  3. Over time, patterns accumulate feedback. When a pattern helps Claude complete a task successfully, it gets a helpful vote. When it leads astray, it gets a harmful vote. High-scoring patterns surface more readily; low-scoring ones fade

The Litmus Test

Not every observation becomes a pattern. To be promoted, an insight must pass all three criteria:

  • Transferable — Useful beyond the original task
  • Actionable — Provides specific guidance, not just information
  • Prescriptive — Says what to do (or avoid), not just what exists

System-specific facts stay as observations on the task. Only insights that would help on a different project in a different context become patterns.
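
Expressed as a predicate, purely as an illustration (kli does not expose this as code):

    def passes_litmus(transferable: bool, actionable: bool,
                      prescriptive: bool) -> bool:
        # An observation is promoted only if all three criteria hold;
        # failing any one keeps it as a task-local observation.
        return transferable and actionable and prescriptive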

How Patterns Help You

When Claude starts working on a task, it queries the playbook for patterns relevant to the current domain and problem. This happens automatically — you'll see Claude reference activated patterns in its reasoning.

The effect is cumulative:

  • The first time Claude works in a new area, it relies on general knowledge
  • After a few tasks, patterns from your specific codebase and conventions start activating
  • Over many sessions, Claude develops a working knowledge of your project's idioms, pitfalls, and proven approaches
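
Here is a minimal sketch of the retrieval step described above, reusing the Pattern record from earlier. The similarity function is a crude word-overlap stand-in for kli's semantic search:

    def similarity(a: str, b: str) -> float:
        # Jaccard word overlap; a toy stand-in for embedding similarity.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)

    def activate_patterns(playbook: list[Pattern], prompt: str,
                          domain: str, k: int = 5) -> list[Pattern]:
        # Restrict to the active domain, then rank by relevance to the
        # prompt, nudged by each pattern's helpful/harmful score.
        candidates = [p for p in playbook if p.domain == domain]
        candidates.sort(key=lambda p: similarity(prompt, p.guidance)
                        + 0.1 * p.score(), reverse=True)
        return candidates[:k]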

Triggering Reflection

Run /kli:reflect after completing a piece of work. Claude will:

  1. Review the session's observations
  2. Identify insights that pass the litmus test
  3. Create or update patterns in the playbook
  4. Report what was learned

Reflection is most valuable after tasks that involved debugging, discovering non-obvious constraints, or finding approaches that worked better than expected.
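
Schematically, the reflection pass reduces to a filter-and-promote loop over the session's observations. In this sketch, the observation fields and the 0.5 near-duplicate threshold are hypothetical, and the litmus and similarity helpers come from the sketches above:

    def reflect(observations, playbook: list[Pattern]) -> list[str]:
        learned = []
        for obs in observations:
            # Step 2: apply the litmus test; task-specific facts stay put.
            if not passes_litmus(obs.transferable, obs.actionable,
                                 obs.prescriptive):
                continue
            # Step 3: update a near-duplicate pattern, or create a new one.
            match = next((p for p in playbook
                          if p.domain == obs.domain
                          and similarity(p.guidance, obs.text) > 0.5), None)
            if match:
                match.guidance = obs.text
            else:
                playbook.append(Pattern(
                    id=f"{obs.domain}-{len(playbook):06d}",
                    domain=obs.domain, guidance=obs.text))
            learned.append(obs.text)
        return learned  # Step 4: reported back to the user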

Domains

Patterns are tagged with domains like lisp, nix, web, or ops. Domain tags help Claude activate the right patterns — when you're working on Nix code, Nix patterns surface; when you're working on Lisp, Lisp patterns surface. The playbook-activate hook detects domains from your prompts and triggers pattern retrieval automatically.
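
A toy version of that detection step, assuming plain keyword matching (the real hook's heuristics are not documented here):

    DOMAIN_KEYWORDS = {
        "lisp": ["defun", "defstruct", "sbcl", "asdf"],
        "nix":  ["flake", "derivation", "nixos", "overlay"],
        "web":  ["css", "react", "endpoint", "html"],
        "ops":  ["deploy", "systemd", "docker", "runbook"],
    }

    def detect_domains(prompt: str) -> list[str]:
        # Return every domain whose keywords appear in the prompt.
        words = set(prompt.lower().split())
        return [domain for domain, kws in DOMAIN_KEYWORDS.items()
                if words & set(kws)]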

Pattern Lifecycle

The full lifecycle of a pattern:

  1. Discovery — An insight surfaces during implementation
  2. Observation — Claude records it in the task's event stream
  3. Promotion — During /kli:reflect, observations that pass the litmus test become patterns
  4. Activation — Retrieved via semantic search when Claude starts relevant new work
  5. Feedback — Marked helpful or harmful based on application outcomes
  6. Evolution — Content updated based on accumulated evidence

Patterns are never deleted. If harmful votes exceed helpful votes, a pattern is deprioritized rather than removed — preserving the record of what didn't work.
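
One way to picture that fade is a retrieval weight that decays smoothly as harmful votes accumulate, approaching zero without the record ever being deleted (illustrative, not kli's actual formula):

    import math

    def retrieval_weight(helpful: int, harmful: int) -> float:
        # Sigmoid of net evidence: strongly harmful patterns approach
        # weight 0 but are never removed, preserving their history.
        return 1.0 / (1.0 + math.exp(harmful - helpful))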

Background

kli's workflow draws on two bodies of work. The research → plan → implement structure comes from Dex Horthy's advanced context engineering methodology at HumanLayer, which established that dividing AI coding work into sequential phases — each producing a compacted artifact as input for the next — dramatically improves output quality in large codebases. kli extends this with a fourth phase, reflect, which closes the feedback loop by promoting observations into reusable patterns.

The playbook concept itself is adapted from the Agentic Context Engineering paper (Stanford, SambaNova, UC Berkeley, 2025), which introduced the methodology of agents writing observations between phases of work. kli first applied this methodology in October 2025 on a production project, where a file-based observation system accumulated 230 tasks and 117 handoff documents before hitting scalability limits.

kli's playbook system extends the original methodology with event-sourced task state (CRDT-based merging for safe parallel sessions), helpful/harmful scoring that lets patterns fade rather than requiring manual curation, and hybrid retrieval combining semantic search with spreading activation over a co-application graph.
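
The spreading-activation half of that retrieval, in miniature: patterns found by semantic search act as seeds, passing a decaying share of their activation to patterns they have previously been applied alongside, so related guidance surfaces even when it does not match the query directly. A sketch under assumed decay and round values:

    def spread_activation(seeds: dict[str, float],
                          co_applied: dict[str, list[str]],
                          decay: float = 0.5,
                          rounds: int = 2) -> dict[str, float]:
        # seeds: pattern id -> score from semantic search.
        # co_applied: pattern id -> ids it has co-applied with.
        activation = dict(seeds)
        frontier = dict(seeds)
        for _ in range(rounds):
            spread: dict[str, float] = {}
            for pid, energy in frontier.items():
                for neighbor in co_applied.get(pid, []):
                    spread[neighbor] = (spread.get(neighbor, 0.0)
                                        + energy * decay)
            for pid, energy in spread.items():
                activation[pid] = activation.get(pid, 0.0) + energy
            frontier = spread
        return activation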