feat: add OpenCode as alternative batch review runner#504

Closed
milimat0810 wants to merge 1 commit into peteromallet:main from milimat0810:feature/opencode-runner

Conversation

@milimat0810

Problem

The batch review pipeline (desloppify review --run-batches) only supports Codex as a runner. There is no way to use OpenCode as the execution engine, even though the infrastructure (NDJSON stream parsing, subprocess management, retry/stall detection) is largely runner-agnostic.

Fix

Add --runner opencode support to the batch review pipeline. When selected, batches are executed via opencode run --format json subprocesses instead of Codex, with results extracted from the NDJSON stream and merged through the same trusted-import pipeline.
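The NDJSON extraction step could look roughly like this. This is a minimal sketch, not the PR's actual `_extract_text_from_opencode_json_stream`; the `{"part": {"text": ...}}` event shape is an assumption for illustration:

```python
import json


def extract_text_from_ndjson_stream(lines):
    """Collect text fragments from an NDJSON event stream.

    Assumed event shape: one JSON object per line; objects carrying a
    {"part": {"text": ...}} payload contribute to the final result.
    """
    parts = []
    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            continue  # tolerate truncated or non-JSON lines in the stream
        if not isinstance(event, dict):
            continue
        text = event.get("part", {}).get("text")
        if isinstance(text, str):
            parts.append(text)
    return "".join(parts)
```

Tolerating malformed lines matters here because a stalled or killed subprocess can leave a truncated final line in the stream.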

Changes

  • runner_process.py (new): run_opencode_batch implementation — builds OpenCode CLI invocations, supports warm-server attach via DESLOPPIFY_OPENCODE_ATTACH, handles NDJSON result extraction
  • orchestrator.py: wire runner dispatch — _build_batch_run_deps now selects run_opencode_batch or run_codex_batch based on args.runner
  • CLI parsing: add --runner argument with codex/opencode choices
  • scope.py: validate_runner() accepts both runner names, SUPPORTED_BLIND_REVIEW_RUNNERS updated
  • runner_process_impl/io.py: add _extract_text_from_opencode_json_stream() for NDJSON parsing
  • runner_process_impl/attempts.py, types.py, attempt_success.py: extend subprocess handling to support OpenCode process lifecycle
  • runner_failures.py: classify OpenCode-specific failure modes
  • importing/policy.py: allow OpenCode runner in blind review import policy
  • packet/build.py, prepare.py, helpers.py: minor adjustments for runner-agnostic packet preparation
  • docs/OPENCODE.md: usage documentation covering basic and warm-server modes
  • Tests: ~1500 lines of new/updated test coverage across review_commands_cases.py, test_runner_internals.py, review_commands_runner_cases.py, and batch split/guard tests
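The runner dispatch described for orchestrator.py can be pictured as follows. This is a sketch under assumed signatures; the real batch functions live in `codex_batch.py` and `runner_process.py`, and stubs stand in for them here:

```python
# Stubs standing in for the real batch-execution functions.
def run_codex_batch(batch):
    return f"codex ran {batch}"


def run_opencode_batch(batch):
    return f"opencode ran {batch}"


RUNNERS = {
    "codex": run_codex_batch,
    "opencode": run_opencode_batch,
}


def select_batch_runner(runner_name):
    """Resolve the --runner choice to a batch-execution callable,
    as _build_batch_run_deps does based on args.runner."""
    try:
        return RUNNERS[runner_name]
    except KeyError:
        raise ValueError(
            f"unsupported runner {runner_name!r}; expected one of {sorted(RUNNERS)}"
        ) from None
```

Keeping the mapping in one table means `validate_runner()` and the CLI `choices` list can be derived from the same source of truth.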

Usage

desloppify review --run-batches --runner opencode --parallel --scan-after-import

Optional warm-server mode to avoid MCP cold-start overhead:

opencode serve --port 4096 &
export DESLOPPIFY_OPENCODE_ATTACH=http://localhost:4096
desloppify review --run-batches --runner opencode --parallel --scan-after-import
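Conceptually, the attach decision only needs to inspect the environment variable. A sketch, assuming the semantics described above; the actual handling of `DESLOPPIFY_OPENCODE_ATTACH` in `runner_process.py` may differ:

```python
import os


def resolve_opencode_target(env=None):
    """Decide whether to attach to a warm OpenCode server or spawn a
    fresh `opencode run` subprocess per batch."""
    env = os.environ if env is None else env
    url = env.get("DESLOPPIFY_OPENCODE_ATTACH", "").strip()
    if url:
        return ("attach", url)  # reuse the warm server, skip MCP cold start
    return ("spawn", None)      # default: one subprocess per batch
```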

Verification

  • pytest desloppify/tests/review/test_runner_internals.py -q
  • pytest desloppify/tests/review/review_commands_cases.py -q
  • pytest desloppify/tests/review/review_commands_runner_cases.py -q
  • pytest desloppify/tests/commands/review/test_review_process_guards_direct.py -q
  • pytest desloppify/tests/commands/review/test_review_runner_batch_split_direct.py -q

Add --runner opencode support to desloppify review --run-batches,
enabling OpenCode subprocess execution as an alternative to Codex.

- Add runner_process.py with run_opencode_batch implementation
- Wire runner dispatch in orchestrator.py based on args.runner
- Add opencode CLI argument parsing and validation
- Update runner failures and IO handling for opencode format
- Add OPENCODE.md documentation
- Update test cases for opencode runner
@peteromallet
Owner

Hey @milimat0810 — thanks for putting this together! OpenCode as an alternative runner is a feature we genuinely want, and you clearly put real work into this (the NDJSON stream parser, retry/stall handling, warm-server attach, and 1500 lines of new tests are all solid). The core logic is sound.

That said, the implementation needs some rework before we can merge it. Here's what we'd need changed:

1. Remove the re-export wrapper shim in runner_process.py

The ~50 lines of codex function delegation (codex_batch_command, run_codex_batch, run_followup_scan, etc.) that re-export from codex_batch.py should be removed. This project has a deliberate policy against backward-compat wrapper shims — we're an internal tool, not a library, so import paths can just change. Keep only the opencode-specific functions in runner_process.py (or rename it to runner_opencode.py for clarity), and update imports in orchestrator.py to pull codex functions directly from codex_batch.

2. Fix the dead stdout_text_observer code path

The stdout_text_observer parameter is added to _run_via_popen's signature but is never threaded through _start_stream_threads to _drain_stream. This means live NDJSON observation during popen runs is dead code — it silently does nothing. Either thread it properly from _start_stream_threads into _drain_stream (for the stdout thread only), or remove the parameter until it's wired up.

3. Separate formatting changes from logic changes

Roughly 60% of the diff is formatting changes (line wrapping, quote style, whitespace) mixed in with the actual feature. This makes review significantly harder. Please either revert the formatting changes and submit them as a separate PR, or drop them entirely.

4. Simplify the prepare.py monkeypatch detection

The module-level _ORIGINAL snapshots with is-check comparisons are more complex than needed. A simpler approach: unconditionally forward the current module references (e.g., compute_narrative_fn=narrative_mod.compute_narrative). This always passes whatever's currently bound — which during tests is the patched version — without needing to detect whether patching happened. Same result, much less machinery.
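The suggestion sketched out, with stand-in names following the example above; this is illustrative, not the project's code:

```python
import types

# Stand-in for the narrative module; in the project this would be a real
# imported module whose attributes tests may monkeypatch.
narrative_mod = types.SimpleNamespace(
    compute_narrative=lambda ctx: f"narrative({ctx})"
)


def build_holistic_packet(ctx, compute_narrative_fn):
    return compute_narrative_fn(ctx)


def prepare(ctx):
    # Look up the attribute at call time and forward it unconditionally.
    # If a test has patched narrative_mod.compute_narrative, this passes
    # the patched version automatically; no _ORIGINAL snapshot or
    # is-check is needed.
    return build_holistic_packet(
        ctx, compute_narrative_fn=narrative_mod.compute_narrative
    )
```

Because the attribute is resolved on every call rather than captured at import time, the monkeypatch case and the normal case go through the exact same code path.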

5. The build_holistic_packet injectable parameters

The three new optional parameters (review_prepare_options_cls, compute_narrative_fn, narrative_context_cls) add indirection for a single use case. Consider whether the unconditional-forwarding approach from point 4 eliminates the need for these entirely.


I'm closing this PR for now, but please don't be discouraged — the feature is wanted and the hard parts (NDJSON parsing, retry logic, the actual runner implementation) are good. A v2 that addresses the above would be very welcome. Happy to discuss any of these points if you have questions!
