Creating Pipelines
Design multi-agent workflows with branching logic, error handling, and resumability.
🔄 What's a Pipeline?
A pipeline defines a multi-agent workflow — an ordered sequence of agent steps where each step's output feeds into the next. Pipelines support branching, loops, and resumability.
- Multi-step workflows: Break complex tasks into manageable agent steps
- Quality gates: Review and validate output between steps
- Error handling: Define what happens when a step fails
- Resumability: Save progress and resume from any step
- Composable agents: Reuse agents across different pipelines
📐 Pipeline Format
Pipelines are defined in YAML with a clear step-by-step structure:
```yaml
name: my-pipeline
version: 1.0.0
description: "What this pipeline accomplishes"
trigger: "natural language pattern *"

steps:
  - name: step-one
    agent: first-agent
    input: "what this step receives"
    output: "what this step produces"
    on-failure: halt

  - name: step-two
    agent: second-agent
    input: "output from step:step-one"
    output: "intermediate result"
    on-failure: loop
    loop-target: step-one
    max-iterations: 3

  - name: step-three
    agent: third-agent
    input: "output from step:step-two"
    output: "final result"
    on-failure: halt

resumable: true
state-file: ".omniskill/pipeline-state.json"
```
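Conceptually, a runner walks the ordered step list and feeds each step's output into the next. Here is a minimal sketch of that sequential core, with agents modeled as plain callables; `run_pipeline` and the toy agents are hypothetical, not part of the actual runtime.

```python
from typing import Callable, Dict, List

def run_pipeline(steps: List[dict], agents: Dict[str, Callable[[str], str]],
                 initial_input: str) -> str:
    """Execute steps in order; each step's output feeds the next step."""
    data = initial_input
    for step in steps:
        agent = agents[step["agent"]]  # look up the agent named by the step
        data = agent(data)             # this output becomes the next input
    return data

# Toy two-step pipeline for illustration
steps = [
    {"name": "step-one", "agent": "upper"},
    {"name": "step-two", "agent": "exclaim"},
]
agents = {"upper": str.upper, "exclaim": lambda s: s + "!"}
print(run_pipeline(steps, agents, "hello"))  # → HELLO!
```

Error handling, loops, and conditions (covered below) layer on top of this basic chain.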
🔀 The Complexity Router Pre-Step
Before any pipeline executes, the complexity-router automatically runs as a pre-step:
1. Classification — Analyzes the request complexity (trivial → simple → moderate → complex → expert)
2. Model Selection — Chooses the optimal model tier (fast/cheap → standard → premium)
3. Pipeline Selection — Determines if this is the right pipeline or if a simpler/more complex one should be used
This routing happens transparently. The router uses signals from:
- Task scope and dependencies
- Required domain expertise
- Expected output complexity
- Time constraints
The complexity router ensures that trivial tasks use fast/cheap models while complex tasks get routed to premium models—optimizing both cost and quality without manual configuration.
See `skills/complexity-router/resources/complexity-signals.md` for the full classification criteria.
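The classification-to-tier mapping described above can be pictured as a simple lookup. The class names follow the document; the exact boundaries below are an assumption for illustration, not the router's real logic.

```python
# Assumed mapping: trivial/simple → fast/cheap, moderate → standard,
# complex/expert → premium (see complexity-signals.md for real criteria).
TIER_BY_COMPLEXITY = {
    "trivial": "fast/cheap",
    "simple": "fast/cheap",
    "moderate": "standard",
    "complex": "premium",
    "expert": "premium",
}

def select_model_tier(complexity: str) -> str:
    """Choose a model tier for a classified request."""
    return TIER_BY_COMPLEXITY[complexity]

print(select_model_tier("trivial"))  # → fast/cheap
print(select_model_tier("expert"))   # → premium
```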
⚙️ Step Configuration
Basic Step Properties
```yaml
- name: unique-step-name      # Unique identifier
  agent: agent-name           # Which agent executes this step
  input: "input description"  # What the agent receives
  output: "output description" # What the agent produces
  on-failure: halt            # Error handling strategy
```
`on-failure` Options

| Option | Behavior | Use When |
|---|---|---|
| `halt` | Stop the pipeline; report failure | Errors are unrecoverable |
| `loop` | Go back to the `loop-target` step | Retry after fixing upstream issue |
| `skip` | Skip this step; continue to next | Step is optional |
| `retry` | Retry this same step | Transient failures |
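A runner might interpret these four options as "which step runs next". The sketch below returns the index of the next step, or `None` to halt; the function name and signature are hypothetical.

```python
from typing import Optional

def next_step_index(step: dict, current: int, step_index: dict,
                    attempts: int) -> Optional[int]:
    """Decide what to run after a step fails. None means halt."""
    policy = step.get("on-failure", "halt")
    if policy == "halt":
        return None                 # stop the pipeline; report failure
    if policy == "skip":
        return current + 1          # continue with the next step
    if policy == "retry":
        return current              # rerun the same step
    if policy == "loop":
        if attempts >= step.get("max-iterations", 1):
            return None             # loop budget exhausted; halt
        return step_index[step["loop-target"]]  # jump back upstream
    raise ValueError(f"unknown on-failure policy: {policy!r}")
```

For example, a failing step with `on-failure: loop`, `loop-target: design`, and `max-iterations: 3` jumps back to the `design` step until the attempt counter reaches 3, then halts.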
Input References
Use `step:name` syntax to reference output from previous steps:

```yaml
# Single input reference
input: "Spec from step:specify"

# Multiple input references
input: "Code from step:implement AND spec from step:specify"

# Conditional input
input: "If step:validate passed, use its output; else use step:implement output"
```
A step can only reference outputs from steps that come before it in the pipeline. The validator will catch circular dependencies.
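The backward-reference rule can be checked with a simple scan: extract every `step:name` token from each step's input and confirm it names a step that appears earlier. This is a sketch of such a check, not the actual validator.

```python
import re

STEP_REF = re.compile(r"step:([\w-]+)")  # matches step:NAME tokens

def check_input_refs(steps):
    """Raise if any step's input references a step that runs later."""
    seen = set()
    for step in steps:
        for ref in STEP_REF.findall(step.get("input", "")):
            if ref not in seen:
                raise ValueError(
                    f"step '{step['name']}' references step:{ref}, "
                    "which has not run yet")
        seen.add(step["name"])

# Valid: implement references the earlier specify step
check_input_refs([
    {"name": "specify", "input": "Feature description"},
    {"name": "implement", "input": "Spec from step:specify"},
])
```

Reversing those two steps would make the check raise, since `step:specify` would be a forward reference.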
🔁 Loop Configuration
When a step can fail and needs to retry from an earlier point:
```yaml
steps:
  - name: design
    agent: ui-designer
    input: "User requirements"
    output: "UI design artifacts"
    on-failure: halt

  - name: implement
    agent: implementer
    input: "Design from step:design"
    output: "Implementation code"
    on-failure: loop
    loop-target: design     # Go back to design step
    max-iterations: 3       # Maximum 3 attempts
    loop-message: "Implementation failed. Revising design..."

  - name: test
    agent: tester
    input: "Code from step:implement"
    output: "Test results"
    on-failure: loop
    loop-target: implement  # Go back to implement only
    max-iterations: 5
```
Loops are powerful but can be expensive. Set `max-iterations` carefully to avoid infinite loops while allowing enough attempts to succeed.
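To see how the `max-iterations` bound plays out, here is a toy simulation of the design→implement loop above, where implementation succeeds only after the design has been revised twice. The function is purely illustrative.

```python
def run_with_loop(max_iterations: int):
    """Simulate a bounded retry loop. Returns (succeeded, attempts_used)."""
    design_revisions = 0
    for attempt in range(1, max_iterations + 1):
        # Assume implement succeeds once the design was revised twice
        if design_revisions >= 2:
            return True, attempt
        design_revisions += 1  # on-failure: loop → revise the design
    return False, max_iterations  # budget exhausted; pipeline halts

print(run_with_loop(3))  # → (True, 3): succeeds on the third attempt
print(run_with_loop(2))  # → (False, 2): budget too small, pipeline halts
```

With `max-iterations: 3` the loop barely succeeds; with `max-iterations: 2` it halts, which is exactly the trade-off the note above warns about.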
🌳 Branching Logic
Create conditional paths based on step outcomes:
```yaml
steps:
  - name: analyze
    agent: analyzer
    input: "Codebase"
    output: "Analysis report"
    on-failure: halt

  - name: decide
    agent: decision-maker
    input: "Report from step:analyze"
    output: "Decision: refactor OR optimize OR accept"
    on-failure: halt

  # Branch A: Refactor path
  - name: refactor
    agent: refactorer
    input: "Code and report from step:analyze"
    output: "Refactored code"
    condition: "step:decide output == 'refactor'"
    on-failure: halt

  # Branch B: Optimize path
  - name: optimize
    agent: optimizer
    input: "Code and report from step:analyze"
    output: "Optimized code"
    condition: "step:decide output == 'optimize'"
    on-failure: halt

  # Merge point: Both paths converge here
  - name: validate
    agent: validator
    input: "Code from step:refactor OR step:optimize"
    output: "Validation results"
    on-failure: halt
```
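A runner only needs to evaluate each step's `condition` against recorded outputs to pick a branch. This sketch supports just the `step:NAME output == 'VALUE'` form shown above; the real runtime's expression syntax may be richer.

```python
import re

# Matches conditions of the form: step:NAME output == 'VALUE'
COND = re.compile(r"step:([\w-]+) output == '([^']*)'")

def should_run(step: dict, outputs: dict) -> bool:
    """Return True if a step's condition passes (or it has none)."""
    cond = step.get("condition")
    if cond is None:
        return True  # unconditional steps always run
    m = COND.fullmatch(cond)
    if m is None:
        raise ValueError(f"unsupported condition syntax: {cond!r}")
    step_name, expected = m.group(1), m.group(2)
    return outputs.get(step_name) == expected

outputs = {"decide": "refactor"}
print(should_run({"condition": "step:decide output == 'refactor'"}, outputs))  # → True
print(should_run({"condition": "step:decide output == 'optimize'"}, outputs))  # → False
```

In the branching example above, only one of `refactor` and `optimize` passes its condition, so the paths are mutually exclusive before converging at `validate`.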
💾 Resumability
Enable pipeline resumption from any step:
```yaml
resumable: true
state-file: ".omniskill/my-pipeline-state.json"

# State tracking configuration
state-tracking:
  save-frequency: "after-each-step"
  include-artifacts: true
  compress: true
```
Using Resumable Pipelines
```shell
# Pipeline interrupted? Resume from where it left off
$ python scripts/resume-pipeline.py my-pipeline

# Or resume from a specific step
$ python scripts/resume-pipeline.py my-pipeline --from-step implement

# View pipeline state
$ python scripts/pipeline-status.py my-pipeline
```
Resumability is most valuable for:

- Long-running pipelines (>30 minutes)
- Pipelines with expensive steps (API calls, builds, etc.)
- Pipelines that require human approval at certain steps
- Development/debugging of pipeline logic
🎯 Example: SDD Pipeline
The Spec-Driven Development pipeline is a comprehensive example:
```yaml
name: sdd-pipeline
version: 1.0.0
description: "Spec-Driven Development: Write spec, implement, review"
trigger: "build feature * from scratch"

steps:
  - name: specify
    agent: spec-writer
    input: "Feature description from user"
    output: "spec.md — Comprehensive specification"
    on-failure: halt

  - name: review-spec
    agent: reviewer
    input: "Spec from step:specify"
    output: "Approval or revision request"
    on-failure: loop
    loop-target: specify
    max-iterations: 3
    loop-message: "Spec needs revision. Iterating..."

  - name: implement
    agent: implementer
    input: "Approved spec from step:review-spec"
    output: "Implementation code in src/"
    on-failure: loop
    loop-target: specify
    max-iterations: 2
    loop-message: "Implementation revealed spec gaps. Revising spec..."

  - name: review-impl
    agent: reviewer
    input: "Code from step:implement AND spec from step:specify"
    output: "Compliance report"
    on-failure: loop
    loop-target: implement
    max-iterations: 5
    loop-message: "Implementation doesn't match spec. Fixing..."

  - name: finalize
    agent: finalizer
    input: "Approved implementation from step:review-impl"
    output: "Final package with docs"
    on-failure: halt

resumable: true
state-file: ".omniskill/sdd-pipeline-state.json"
```
Pipeline Flow Visualization
```
 User request
      ↓
 spec-writer
      ↓
reviewer (spec)
      ↓ (if approved)
 implementer
      ↓
reviewer (impl)
      ↓ (if passes)
  finalizer
      ↓
 Complete ✓
```
✅ Validation
Validate your pipeline before deploying:
```shell
$ python scripts/validate.py pipelines/my-pipeline.yaml
```
The validator checks:
- All referenced agents exist
- No circular step dependencies
- Loop targets are valid steps
- Input references point to previous steps
- Trigger patterns are unique
- YAML syntax is valid
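Two of the checks above (agent existence and loop-target validity) reduce to set membership over the pipeline definition. This is an illustrative sketch, not the validator's actual implementation.

```python
def validate_pipeline(pipeline: dict, known_agents: set) -> list:
    """Return a list of error strings; empty list means the checks pass."""
    errors = []
    step_names = {s["name"] for s in pipeline["steps"]}
    for s in pipeline["steps"]:
        # Check: all referenced agents exist
        if s["agent"] not in known_agents:
            errors.append(f"{s['name']}: unknown agent '{s['agent']}'")
        # Check: loop targets are valid steps
        target = s.get("loop-target")
        if target is not None and target not in step_names:
            errors.append(f"{s['name']}: invalid loop-target '{target}'")
    return errors

good = {"steps": [
    {"name": "design", "agent": "ui-designer"},
    {"name": "implement", "agent": "implementer", "loop-target": "design"},
]}
print(validate_pipeline(good, {"ui-designer", "implementer"}))  # → []
```

A pipeline referencing a nonexistent agent or a misspelled `loop-target` would produce one error string per violation.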
✨ Pipeline Best Practices
1. Clear Step Names
Use descriptive, action-oriented names: `specify`, `implement`, `review-spec`.
2. Appropriate Error Handling
Don't use `on-failure: skip` for critical steps. Use `halt` for unrecoverable errors.
3. Bounded Loops
Always set `max-iterations` to prevent infinite loops. Consider the cost of each iteration.
4. Quality Gates
Insert review steps at key points (after spec, after implementation, before deployment).
5. Human-in-the-Loop
For critical decisions, include manual approval steps rather than full automation.
6. State Management
For long pipelines, enable resumability. For short ones, keep it simple.
7. Composable Agents
Design agents to be reusable across multiple pipelines, not pipeline-specific.
🔄 Existing Pipelines
OMNISKILL includes several pre-built pipelines:
| Pipeline | Trigger | Steps | Use Case |
|---|---|---|---|
| `sdd-pipeline` | "build feature X" | 5 steps | Spec-driven feature development |
| `ux-pipeline` | "design feature X" | 7 steps | Complete UX workflow |
| `debug-pipeline` | "fix bug X" | 4 steps | Investigate, fix, test, verify |
| `skill-factory` | "create skill for X" | 6 steps | Interactive skill creation |
See the pipelines directory for all available pipelines and their source code.