Plugin Reference
This page shows the internal instructions Claude follows when you invoke /kli:iterate_plan in Claude Code.
auto-generated from kli/plugin/commands/iterate_plan.md

Iterate Plan

Iterate on existing implementation plans with thorough research and updates

You are tasked with updating existing implementation plans based on user feedback. You should be skeptical, thorough, and ensure changes are grounded in actual codebase reality.

Initial Response

When this command is invoked:

  1. Set up task context (a consolidated call sketch follows the scenarios below):
     - If a task name or path is provided: task_bootstrap(task_id)
     - If no parameter: call task_get() to check the current task. If none, ask the user.
  2. Use task_graph(query="plan") to see the current plan structure (phases, status, dependencies).
  3. Handle different input scenarios:

If NO task/plan identified:

```
I'll help you iterate on an existing plan.

Which task's plan would you like to update? Provide the task name or use task_list() to find it.
```

Wait for user input.

If task identified but NO feedback:

```
I've found the plan. Current structure: [output of task_graph(query="plan")]

What changes would you like to make?

For example:
- "Add a phase for migration handling"
- "Update the success criteria to include performance tests"
- "Adjust the scope to exclude feature X"
- "Split Phase 2 into two separate phases"
```

Wait for user input.

If BOTH task AND feedback provided:
- Proceed immediately to Step 1
- No preliminary questions needed
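Taken together, the setup above reduces to a short call sequence. A minimal sketch, assuming the task tools accept the arguments shown elsewhere in this document:

```
task_bootstrap(task_id)       # only if a task name or path was provided
task_get()                    # otherwise: check for a current task; if none, ask the user
task_graph(query="plan")      # in every case: inspect phases, status, and dependencies
```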

Process Steps

Step 1: Understand Current Plan

  1. Load plan structure from the task DAG:
     - task_graph(query="plan") — shows phases, status, dependencies
     - task_get() — shows description, goals, observations, metadata
     - If plan.md exists as an artifact, read it for detailed criteria
  2. Understand the requested changes:
     - Parse what the user wants to add/modify/remove
     - Identify whether the changes require codebase research
     - Determine the scope of the update

Step 2: Research If Needed

Only spawn research tasks if the changes require new technical understanding.

If the user's feedback requires understanding new code patterns or validating assumptions:

  1. Record the iteration intent: observe("Plan iteration: <what user wants changed>")
  2. Spawn parallel sub-tasks for research, using the right agent for each type of research:

     For code investigation:
     - codebase-locator - to find relevant files
     - codebase-analyzer - to understand implementation details
     - pattern-finder - to find similar patterns

     For historical context (use PQ queries):
     - pq_query('(-> (search "<topic>") (:take 5))') - find patterns
     - pq_query('(-> (proven :min 3) (:take 10))') - get proven patterns (helpful >= 3)

     Be EXTREMELY specific about directories (see the example prompt after this list):
     - Include full path context in prompts
     - Specify exact directories to search

  3. Read any new files identified by research:
     - Read them FULLY into the main context
     - Cross-reference them with the plan requirements
  4. Wait for ALL sub-tasks to complete before proceeding
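For instance, a sufficiently specific prompt for a locator sub-task might read as follows; the directories and topic are illustrative placeholders, not taken from any real task:

```
Use codebase-locator to find where API error responses are constructed.
Search only in src/server/handlers/ and src/server/middleware/.
For each relevant file, report file:line references and a one-line summary.
Do not read files outside those two directories.
```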

Step 3: Present Understanding and Approach

Before making changes, confirm your understanding:

Based on your feedback, I understand you want to:
- [Change 1 with specific detail]
- [Change 2 with specific detail]

My research found:
- [Relevant code pattern or constraint]
- [Important discovery that affects the change]

I plan to update the plan by:
1. [Specific modification to make]
2. [Another modification]

Does this align with your intent?

Get user confirmation before proceeding.

Step 4: Update the Plan

Plans are task DAGs. Update the plan structure using task MCP tools:

  1. Modify the DAG as needed:
     - Add phases (preferred): use scaffold-plan! for multiple phases with dependencies:
       task_query("(scaffold-plan! (new-phase \"Implement new feature\" :after existing-phase) (follow-up \"Integration tests\" :after new-phase))")
     - Add single phase: task_fork(name="implement-new-feature", from=parent_task_id, edge_type="phase-of", description="...") plus dependency edges with task_link (see the sketch after this list). Names are validated for descriptiveness (avoid P1, phase-1, etc.)
     - Update phase description: switch to the phase task with task_set_current, then observe("Updated scope: <changes>"), switch back
     - Reorder phases: adjust depends-on edges with task_link / task_sever
     - Remove phase(s): use TQ bulk sever for efficiency, then record the decision:

       ```lisp
       ;; Single phase removal
       task_query("(-> (node \"obsolete-phase\") (:sever-from-parent! :phase-of))")

       ;; Multiple phases at once (replaces multiple task_sever calls)
       task_query("(-> (node \"phase-1\" \"phase-2\" \"phase-3\") (:sever-from-parent! :phase-of))")
       ```

       Then: observe("Phases removed: <phases>. Reason: <reason>")

  2. If a plan.md artifact exists, update it to match the DAG changes:
     - Use the Edit tool for surgical changes
     - Keep all file:line references accurate
     - Update success criteria if needed

  3. Ensure consistency:
     - Verify with task_graph(query="plan") after changes
     - Maintain the distinction between automated vs manual success criteria
     - Include specific file paths for new content

  4. Record the iteration: observe("Plan iteration complete: <summary of changes>")
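For the single-phase case described above, the calls might look like the sketch below. The phase names are illustrative, and the task_link argument names and edge direction are assumptions rather than documented signatures:

```
# Create the new phase under the parent task
task_fork(name="add-error-handling", from=parent_task_id, edge_type="phase-of",
          description="Add structured error handling around API calls")

# Make it depend on an existing phase (argument names are assumed)
task_link(from="add-error-handling", to="implement-new-feature", edge_type="depends-on")
```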

Step 5: Review and Complete

  1. Present the changes made:

     ```
     I've updated the plan at ace/tasks/[task-name]/plan.md

     Changes made:
     - [Specific change 1]
     - [Specific change 2]

     The updated plan now:
     - [Key improvement]
     - [Another improvement]

     Would you like any further adjustments?
     ```

  2. Be ready to iterate further based on feedback

Important Guidelines

  1. Be Skeptical:
     - Don't blindly accept change requests that seem problematic
     - Question vague feedback - ask for clarification
     - Verify technical feasibility with code research
     - Point out potential conflicts with existing plan phases

  2. Be Surgical:
     - Make precise edits, not wholesale rewrites
     - Preserve good content that doesn't need changing
     - Only research what's necessary for the specific changes
     - Don't over-engineer the updates

  3. Be Thorough:
     - Read the entire existing plan before making changes
     - Research code patterns if changes require new technical understanding
     - Ensure updated sections maintain quality standards
     - Verify success criteria are still measurable

  4. Be Interactive:
     - Confirm understanding before making changes
     - Show what you plan to change before doing it
     - Allow course corrections
     - Don't disappear into research without communicating

  5. Track Progress:
     - Use observe() to record iteration decisions and progress
     - Verify the plan DAG with task_graph(query="plan") after changes

  6. No Open Questions:
     - If the requested change raises questions, ASK
     - Research or get clarification immediately
     - Do NOT update the plan with unresolved questions
     - Every change must be complete and actionable

Success Criteria Guidelines

When updating success criteria, always maintain the two-category structure (an example follows the list below):

  1. Automated Verification (can be run by execution agents):
     - Commands that can be run: make test, npm run lint, etc.
     - Prefer nix build or make commands when possible
     - Specific files that should exist
     - Code compilation/type checking

  2. Manual Verification (requires human testing):
     - UI/UX functionality
     - Performance under real conditions
     - Edge cases that are hard to automate
     - User acceptance criteria
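For illustration, a criteria block in plan.md that keeps the two categories separate might look like this; the file path and the manual checks are placeholders, not taken from a real plan:

```
Automated Verification:
- [ ] make test passes
- [ ] npm run lint reports no errors
- [ ] src/api/error_handler.ts exists and type-checks

Manual Verification:
- [ ] Error messages render correctly in the UI
- [ ] Response latency is acceptable under realistic load
```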

Sub-task Spawning Best Practices

When spawning research sub-tasks:

  1. Only spawn if truly needed - don't research for simple changes
  2. Spawn multiple tasks in parallel for efficiency
  3. Each task should be focused on a specific area
  4. Provide detailed instructions including:
     - Exactly what to search for
     - Which directories to focus on
     - What information to extract
     - Expected output format
  5. Request specific file:line references in responses
  6. Wait for all tasks to complete before synthesizing
  7. Verify sub-task results - if something seems off, spawn follow-up tasks

Example Interaction Flows

Scenario 1: User provides everything upfront

User: /iterate_plan ace/tasks/2025-10-16-feature/plan.md - add phase for error handling
Assistant: [Reads plan, researches error handling patterns if needed, updates plan]

Scenario 2: User provides just the plan file

User: /iterate_plan ace/tasks/2025-10-16-feature/plan.md
Assistant: I've found the plan. What changes would you like to make?
User: Split Phase 2 into two phases - one for backend, one for frontend
Assistant: [Proceeds with update]

Scenario 3: User provides no arguments

User: /iterate_plan
Assistant: Which plan would you like to update? Please provide the path...
User: ace/tasks/2025-10-16-feature/plan.md
Assistant: I've found the plan. What changes would you like to make?
User: Add more specific success criteria
Assistant: [Proceeds with update]