Iterate Plan
Iterate on existing implementation plans with thorough research and updates
You are tasked with updating existing implementation plans based on user feedback. You should be skeptical, thorough, and ensure changes are grounded in actual codebase reality.
Initial Response
When this command is invoked:
Set up task context:
- If task name or path provided: `task_bootstrap(task_id)`
- If no parameter: Call `task_get()` to check the current task. If none, ask the user.
- Use `task_graph(query="plan")` to see the current plan structure (phases, status, dependencies)

Handle different input scenarios:
If NO task/plan identified:
```
I'll help you iterate on an existing plan.

Which task's plan would you like to update? Provide the task name or use task_list() to find it.
```
Wait for user input.
If task identified but NO feedback:
```
I've found the plan. Current structure:
[output of task_graph(query="plan")]

What changes would you like to make?

For example:
- "Add a phase for migration handling"
- "Update the success criteria to include performance tests"
- "Adjust the scope to exclude feature X"
- "Split Phase 2 into two separate phases"
```
Wait for user input.
If BOTH task AND feedback provided:
- Proceed immediately to Step 1
- No preliminary questions needed
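For illustration, a typical context-setup sequence for the steps above might look like the sketch below. The task name is hypothetical, and passing a name directly to `task_bootstrap` is an assumption based on the convention shown above:

```
# Task name provided by the user (hypothetical name)
task_bootstrap("2025-10-16-feature")

# No parameter provided: fall back to the current task
task_get()    # if nothing is current, ask the user instead

# Either way, inspect the plan structure before changing anything
task_graph(query="plan")
```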
Process Steps
Step 1: Understand Current Plan
Load plan structure from task DAG:
`task_graph(query="plan")` — shows phases, status, dependencies
`task_get()` — shows description, goals, observations, metadata
If plan.md exists as an artifact, read it for the detailed criteria
Understand the requested changes:
Parse what the user wants to add/modify/remove
Identify if changes require codebase research
Determine scope of the update
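A minimal sketch of the plan-loading calls above, assuming a task is already current; the plan.md path and the Read call are illustrative, not a confirmed interface:

```
task_graph(query="plan")   # phases, status, dependency edges
task_get()                 # description, goals, observations, metadata

# If a plan.md artifact exists, read it in full for the detailed criteria
Read("ace/tasks/2025-10-16-feature/plan.md")   # illustrative path
```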
Step 2: Research If Needed
Only spawn research tasks if the changes require new technical understanding.
If the user's feedback requires understanding new code patterns or validating assumptions:
Record iteration intent:
`observe("Plan iteration: <what user wants changed>")`

Spawn parallel sub-tasks for research, using the right agent for each type of research:
For code investigation:
- codebase-locator - To find relevant files
- codebase-analyzer - To understand implementation details
- pattern-finder - To find similar patterns
For historical context (use PQ queries):
- pq_query('(-> (search "<topic>") (:take 5))') - Find patterns
- pq_query('(-> (proven :min 3) (:take 10))') - Get proven patterns (helpful >= 3)
Be EXTREMELY specific about directories:
- Include full path context in prompts
- Specify exact directories to search
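As an illustration of that level of specificity, a research prompt might look like this (the agent, directories, and topic are hypothetical):

```
Spawn codebase-analyzer:
  "Analyze how error handling is implemented in src/api/handlers/ and
   src/api/middleware/ only - do not search outside these directories.
   Describe the retry/backoff pattern in use and return file:line
   references for every finding."
```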
Read any new files identified by research:
Read them FULLY into the main context
Cross-reference with the plan requirements
Wait for ALL sub-tasks to complete before proceeding
Step 3: Present Understanding and Approach
Before making changes, confirm your understanding:
Based on your feedback, I understand you want to:
- [Change 1 with specific detail]
- [Change 2 with specific detail]
My research found:
- [Relevant code pattern or constraint]
- [Important discovery that affects the change]
I plan to update the plan by:
1. [Specific modification to make]
2. [Another modification]
Does this align with your intent?
Get user confirmation before proceeding.
Step 4: Update the Plan
Plans are task DAGs. Update the plan structure using task MCP tools:
Modify the DAG as needed:
- Add phases (preferred): Use `scaffold-plan!` for multiple phases with dependencies:
  `task_query("(scaffold-plan! (new-phase \"Implement new feature\" :after existing-phase) (follow-up \"Integration tests\" :after new-phase))")`
- Add a single phase: `task_fork(name="implement-new-feature", from=parent_task_id, edge_type="phase-of", description="...")` + add dependency edges with `task_link`. Names are validated for descriptiveness (avoid `P1`, `phase-1`, etc.)
- Update a phase description: Switch to the phase task with `task_set_current`, then `observe("Updated scope: <changes>")`, then switch back
- Reorder phases: Adjust `depends-on` edges with `task_link`/`task_sever` (see the sketch below)
- Remove phase(s): Use TQ bulk sever for efficiency, then record the decision:

  ```lisp
  ;; Single phase removal
  task_query("(-> (node \"obsolete-phase\") (:sever-from-parent! :phase-of))")

  ;; Multiple phases at once (replaces multiple task_sever calls)
  task_query("(-> (node \"phase-1\" \"phase-2\" \"phase-3\") (:sever-from-parent! :phase-of))")
  ```

  Then: `observe("Phases removed: <phases>. Reason: <reason>")`

If a plan.md artifact exists, update it to match the DAG changes:
Use the Edit tool for surgical changes
Keep all file:line references accurate
Update success criteria if needed
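For the description-update and reordering bullets above, a hypothetical sequence is sketched below. The `task_sever`/`task_link` parameter names, and passing a task name to `task_set_current`, are assumptions rather than a confirmed schema; verify them against the actual tool definitions first:

```
# Update a phase description, then return to the parent task
task_set_current("implement-new-feature")
observe("Updated scope: backend only; frontend split into its own phase")
task_set_current(parent_task_id)

# Reorder phases by adjusting depends-on edges
# (parameter names below are assumptions)
task_sever(from="frontend-phase", to="backend-phase", edge_type="depends-on")
task_link(from="frontend-phase", to="api-contract-phase", edge_type="depends-on")
```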
Ensure consistency:
Verify with `task_graph(query="plan")` after changes
Maintain the distinction between automated vs manual success criteria
Include specific file paths for new content
Record the iteration:
observe("Plan iteration complete: <summary of changes>")
Step 5: Review and Complete
- Present the changes made:

  ```
  I've updated the plan at `ace/tasks/[task-name]/plan.md`

  Changes made:
  - [Specific change 1]
  - [Specific change 2]

  The updated plan now:
  - [Key improvement]
  - [Another improvement]

  Would you like any further adjustments?
  ```
- Be ready to iterate further based on feedback
Important Guidelines
Be Skeptical:
Don't blindly accept change requests that seem problematic
Question vague feedback - ask for clarification
Verify technical feasibility with code research
Point out potential conflicts with existing plan phases
Be Surgical:
Make precise edits, not wholesale rewrites
Preserve good content that doesn't need changing
Only research what's necessary for the specific changes
Don't over-engineer the updates
Be Thorough:
Read the entire existing plan before making changes
Research code patterns if changes require new technical understanding
Ensure updated sections maintain quality standards
Verify success criteria are still measurable
Be Interactive:
Confirm understanding before making changes
Show what you plan to change before doing it
Allow course corrections
Don't disappear into research without communicating
Track Progress:
Use `observe()` to record iteration decisions and progress
Verify the plan DAG with `task_graph(query="plan")` after changes

No Open Questions:
If the requested change raises questions, ASK
Research or get clarification immediately
Do NOT update the plan with unresolved questions
Every change must be complete and actionable
Success Criteria Guidelines
When updating success criteria, always maintain the two-category structure:
Automated Verification (can be run by execution agents):
Commands that can be run: `make test`, `npm run lint`, etc.
Prefer `nix build` or `make` commands when possible
Specific files that should exist
Code compilation/type checking
Manual Verification (requires human testing):
UI/UX functionality
Performance under real conditions
Edge cases that are hard to automate
User acceptance criteria
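For example, an updated success criteria section in plan.md might look like this (the commands and features are illustrative, not prescribed):

```
Automated Verification:
- `make test` passes, including the new migration tests
- `npm run lint` reports no new warnings
- src/db/migrations/0042_add_index.sql exists and applies cleanly

Manual Verification:
- Dashboard loads in under ~2 seconds with 10k records
- Error banner appears when the API is unreachable
```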
Sub-task Spawning Best Practices
When spawning research sub-tasks:
- Only spawn if truly needed - don't research for simple changes
- Spawn multiple tasks in parallel for efficiency
- Each task should be focused on a specific area
- Provide detailed instructions including:
- Exactly what to search for
- Which directories to focus on
- What information to extract
- Expected output format
- Request specific file:line references in responses
- Wait for all tasks to complete before synthesizing
- Verify sub-task results - if something seems off, spawn follow-up tasks
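Putting these points together, a complete sub-task instruction might look like the following sketch (agent, directories, and topic are hypothetical):

```
Spawn pattern-finder:
  Search for: existing retry/backoff implementations
  Directories: src/services/payments/, src/lib/http/
  Extract: which helper is used, its default configuration, and any
           tests that exercise the retry path
  Output: a bullet list of findings, each with file:line references
```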
Example Interaction Flows
Scenario 1: User provides everything upfront
User: /iterate_plan ace/tasks/2025-10-16-feature/plan.md - add phase for error handling
Assistant: [Reads plan, researches error handling patterns if needed, updates plan]
Scenario 2: User provides just plan file
User: /iterate_plan ace/tasks/2025-10-16-feature/plan.md
Assistant: I've found the plan. What changes would you like to make?
User: Split Phase 2 into two phases - one for backend, one for frontend
Assistant: [Proceeds with update]
Scenario 3: User provides no arguments
User: /iterate_plan
Assistant: Which plan would you like to update? Please provide the path...
User: ace/tasks/2025-10-16-feature/plan.md
Assistant: I've found the plan. What changes would you like to make?
User: Add more specific success criteria
Assistant: [Proceeds with update]