## Summary

There is currently no way to programmatically start a session with a predefined prompt, let it run to completion, and capture structured results. This blocks automated benchmarks, regression tests, and A/B comparisons.
## Proposed

```sh
opencode --headless \
  --prompt "Fix the null pointer exception in src/parser.rs" \
  --max-turns 20 \
  --output-json result.json
```

Output: structured JSON with `outcome`, `turns`, `tokens`, `files_modified`, `tools_used`.
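A minimal sketch of consuming that output, assuming the field names listed above; the JSON shape shown here is hypothetical, not an existing schema:

```python
import json

# Hypothetical example of what result.json might contain; field names
# follow the proposal above, the real schema may differ.
sample = json.loads("""{
  "outcome": "success",
  "turns": 7,
  "tokens": {"input": 18000, "output": 4200},
  "files_modified": ["src/parser.rs"],
  "tools_used": ["read", "edit", "bash"]
}""")

def passed(result: dict, max_turns: int = 20) -> bool:
    """A benchmark harness could gate on outcome and turn budget."""
    return result["outcome"] == "success" and result["turns"] <= max_turns

print(passed(sample))
```

A CI job could exit nonzero when `passed` is false, which is all the "fail if quality drops" use case needs at its simplest.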
## Use cases

- Benchmarking: "Does adding MCP tools improve quality?" Run the same tasks with and without them, then compare.
- CI: "Run these tasks; fail if quality drops below a threshold."
- Research: reproducible evaluation of AI coding assistants.
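The A/B comparison above could be scripted roughly as follows. This is a sketch under the assumption that the proposed flags exist; `run_task` and `compare` are hypothetical helper names:

```python
import json
import subprocess

def run_task(prompt: str, extra_args: tuple[str, ...] = ()) -> dict:
    """Run one headless session with the *proposed* CLI flags (these do
    not exist yet) and return the parsed JSON result."""
    subprocess.run(
        ["opencode", "--headless", "--prompt", prompt,
         "--max-turns", "20", "--output-json", "result.json", *extra_args],
        check=True,
    )
    with open("result.json") as f:
        return json.load(f)

def compare(results_a: list[dict], results_b: list[dict]) -> float:
    """Difference in success rate between two runs of the same task set
    (e.g. without vs. with MCP tools). Positive means B did better."""
    def rate(rs: list[dict]) -> float:
        return sum(r["outcome"] == "success" for r in rs) / len(rs)
    return rate(results_b) - rate(results_a)
```

Because each run lands in a structured file rather than a terminal transcript, the same harness serves benchmarks, CI gates, and reproducible research runs.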