Architecture Overview

openTiger is an orchestration system that sustains autonomous execution using multiple agents and state tables.


0. Runtime Control Loop (Overview)

```mermaid
flowchart LR
  R[Requirement / Issues / PRs] --> P[Planner]
  P --> T[(tasks)]
  T --> D[Dispatcher]
  D --> W[Worker / Tester / Docser]
  W --> RUN[(runs/artifacts)]
  RUN --> J[Judge]
  J --> T
  T --> C[Cycle Manager]
  C --> T
  C --> P
```

This loop prioritizes "never stopping": on failure, the recovery strategy is switched via a state transition.

TigerResearch is implemented as a planner-first plugin specialization on top of the same loop:

  • entry via POST /plugins/tiger-research/jobs
  • planner decomposition via --research-job
  • runtime execution via tasks.kind=research
  • convergence via Cycle Manager + Judge
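A minimal sketch of submitting a job through the entry point above. The endpoint path comes from this document; the payload shape (a single `query` field) is a hypothetical illustration, not the plugin's actual schema.

```typescript
// Build a request for POST /plugins/tiger-research/jobs.
// The `query` field is an assumed payload shape for illustration.
function buildResearchJobRequest(baseUrl: string, query: string) {
  return {
    url: `${baseUrl}/plugins/tiger-research/jobs`,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  };
}

// Usage (with any HTTP client):
//   const req = buildResearchJobRequest("http://localhost:3000", "...");
//   await fetch(req.url, req);
```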

Reading Order for Incident Investigation (Common Lookup Path)

After understanding the architecture, the shortest way to investigate an incident is to trace in the order: state vocabulary -> transition -> owner -> implementation.

  1. Confirm state vocabulary in state-model
  2. Check transitions and recovery paths in flow
  3. Run API procedures and operational shortcuts in operations
  4. Identify owning agent and implementation tracing path in agent/README

1. Components

Service Layer (API / @openTiger/api)

  • Dashboard backend
  • Config management (/config)
  • System control (/system/*)
  • Read APIs (/tasks, /runs, /agents, /plans, /judgements, /logs)
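A small helper sketching calls to the read APIs listed above. The routes are from this document; any query parameters (such as a `status` filter) are assumptions for illustration.

```typescript
// Build a read-API URL. Routes follow the list above; query parameters
// are hypothetical filters, not a documented API surface.
type ReadResource = "tasks" | "runs" | "agents" | "plans" | "judgements" | "logs";

function readApiUrl(
  baseUrl: string,
  resource: ReadResource,
  params: Record<string, string> = {},
): string {
  const qs = new URLSearchParams(params).toString();
  return `${baseUrl}/${resource}${qs ? `?${qs}` : ""}`;
}
```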

Planning Layer (Planner / @openTiger/planner)

  • Generate task plans from requirement / issue
  • Dependency normalization
  • Policy application
  • Documentation gap detection
  • Details: agent/planner

Dispatch Control Layer (Dispatcher / @openTiger/dispatcher)

  • Select queued tasks
  • Acquire lease
  • Assign execution agents
  • Process / Docker startup
  • Details: agent/dispatcher

Execution Layer (Worker / Tester / Docser / @openTiger/worker)

  • LLM execution (opencode, claude_code, or codex)
  • Change verification (commands + policy)
  • Commit/push/PR creation (git mode)
  • Recovery branching on failure
  • Details: agent/worker, agent/tester, agent/docser

Judgement Layer (Judge / @openTiger/judge)

  • Evaluate successful runs (CI / policy / LLM)
  • Approve / request_changes decision
  • Merge / retry / autofix task creation
  • Details: agent/judge

Convergence Layer (Cycle Manager / @openTiger/cycle-manager)

  • Cleanup loop
  • failed/blocked recovery
  • Issue backlog sync
  • Replan decision
  • Details: agent/cycle-manager

Dashboard Layer (Dashboard / @openTiger/dashboard)

  • UI for startup/config/state monitoring
  • Process start/stop
  • Task/run/judgement/log display

TigerResearch Subsystem (Cross-Cutting)

  • Query entry and job lifecycle API (/plugins/tiger-research/*)
  • Planner-first claim decomposition
  • Claim-level parallel collection/challenge/write tasks
  • Research quality convergence loop
  • Full UI observability from dashboard plugins section

2. Data Stores

Persistent Store (PostgreSQL)

Main tables:

  • tasks
  • runs
  • artifacts
  • leases
  • events
  • agents
  • cycles
  • config
  • TigerResearch plugin tables (plugins/tiger-research/src/db.ts)
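An illustrative row shape for the central `tasks` table. The status and blocked-reason vocabulary appears elsewhere in this document; the exact column names and types are assumptions, not the real schema.

```typescript
// Hypothetical sketch of a `tasks` row; column names are illustrative.
type TaskStatus = "queued" | "running" | "blocked" | "done" | "failed";

type BlockedReason =
  | "awaiting_judge"
  | "quota_wait"
  | "needs_rework"
  | "issue_linking";

interface TaskRow {
  id: string;
  kind: string; // e.g. "research" for TigerResearch tasks
  status: TaskStatus;
  blockedReason?: BlockedReason; // only meaningful when status === "blocked"
}

const example: TaskRow = {
  id: "task-1",
  kind: "research",
  status: "blocked",
  blockedReason: "awaiting_judge",
};
```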

Message Queue (Redis / BullMQ)

  • Task queue
  • Dead-letter queue
  • Worker concurrency/lock control
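A pure sketch of the retry/dead-letter decision a BullMQ-style queue makes: a job retries with backoff until its attempt cap, then stops. The cap and backoff values here are illustrative, not openTiger's actual configuration.

```typescript
// Decide whether a failed job retries (with exponential backoff) or
// moves to the dead-letter queue. Thresholds are assumptions.
function nextRetry(
  attemptsMade: number,
  maxAttempts = 3,
  baseDelayMs = 1_000,
): { action: "retry"; delayMs: number } | { action: "dead-letter" } {
  if (attemptsMade >= maxAttempts) return { action: "dead-letter" };
  return { action: "retry", delayMs: baseDelayMs * 2 ** attemptsMade };
}
```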

3. High-Level Execution Flow

  1. Planner creates tasks (queued)
  2. Dispatcher acquires lease and moves to running
  3. Worker/Tester/Docser execute and verify
  4. On success: blocked(awaiting_judge) or done
  5. Judge evaluates and moves to done / retry / rework
  6. Cycle Manager continues recovery and replanning
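The steps above can be sketched as a transition function. The status and event names follow this document; the transition table itself is an illustrative simplification of the real state machine.

```typescript
// Simplified status progression for the happy path and first-level failure.
type Status = "queued" | "running" | "blocked(awaiting_judge)" | "done" | "failed";
type Event = "lease_acquired" | "run_succeeded" | "run_failed" | "judge_approved";

function next(status: Status, event: Event): Status {
  if (status === "queued" && event === "lease_acquired") return "running";
  if (status === "running" && event === "run_succeeded") return "blocked(awaiting_judge)";
  if (status === "running" && event === "run_failed") return "failed";
  if (status === "blocked(awaiting_judge)" && event === "judge_approved") return "done";
  return status; // unknown transitions are ignored in this sketch
}
```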

TigerResearch path:

  1. API creates plugin job row (research_jobs)
  2. Planner decomposes query to claims
  3. Dispatcher/Worker execute research tasks in parallel
  4. Cycle Manager drives collect/challenge/write/rework
  5. Judge applies research quality decision (when enabled)
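The convergence loop the Cycle Manager drives can be sketched as a phase function. The phase names come from the steps above; that `rework` loops back to `collect` is an assumption about how convergence retries.

```typescript
// Sketch of the collect -> challenge -> write (-> rework) research loop.
type Phase = "collect" | "challenge" | "write" | "rework";

function nextPhase(phase: Phase, qualityOk: boolean): Phase | "done" {
  switch (phase) {
    case "collect": return "challenge";
    case "challenge": return "write";
    case "write": return qualityOk ? "done" : "rework";
    case "rework": return "collect"; // assumed: rework restarts collection
  }
}
```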

Details in flow.

4. State Design Characteristics

  • Explicit blocked reason
    • awaiting_judge
    • quota_wait
    • needs_rework
    • issue_linking (for Planner internal coordination)
  • Duplicate execution prevention
    • lease
    • runtime lock
    • Judge idempotency (judgedAt, judgementVersion)
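Judge idempotency with the two fields named above can be sketched like this: a run already judged at the current judgement version is skipped on re-runs. The field names follow the text; the comparison logic is an assumption.

```typescript
// Skip runs that were already judged at the current version.
interface RunJudgement {
  judgedAt: Date | null;
  judgementVersion: number | null;
}

function needsJudgement(run: RunJudgement, currentVersion: number): boolean {
  if (run.judgedAt === null) return true; // never judged
  return (run.judgementVersion ?? 0) < currentVersion; // re-judge after version bump
}
```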

5. Modes and Execution Environment

  • Repository mode
    • git / local
  • Judge mode
    • git / local / auto
  • Execution environment
    • host (process)
    • sandbox (docker)

Details in mode and execution-mode.

6. Plugin Platform Architecture (Manifest v1)

Plugin integration is standardized through PluginManifestV1 and a shared loader in packages/plugin-sdk.

High-level loading model:

  1. Core discovers plugin packages (plugins/<id>/index.ts)
  2. Loader validates manifest compatibility (pluginApiVersion)
  3. Loader resolves dependency order (requires)
  4. Enabled plugins are mounted into each agent through hooks
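The loading model above can be sketched as a compatibility check plus a dependency-order resolution. Field names (`id`, `pluginApiVersion`, `requires`) follow the text; the algorithm (a plain depth-first topological sort with cycle detection) is an assumption about how the loader resolves order.

```typescript
// Resolve plugin load order: validate API version, then topo-sort by `requires`.
interface PluginManifestV1 {
  id: string;
  pluginApiVersion: number;
  requires: string[];
}

function loadOrder(plugins: PluginManifestV1[], coreApiVersion: number): string[] {
  const byId = new Map(plugins.map((p) => [p.id, p]));
  const order: string[] = [];
  const visiting = new Set<string>();

  function visit(id: string): void {
    if (order.includes(id)) return; // already resolved
    if (visiting.has(id)) throw new Error(`dependency cycle at ${id}`);
    const p = byId.get(id);
    if (!p) throw new Error(`missing dependency: ${id}`);
    if (p.pluginApiVersion !== coreApiVersion) throw new Error(`incompatible: ${id}`);
    visiting.add(id);
    p.requires.forEach(visit); // dependencies load first
    visiting.delete(id);
    order.push(id);
  }

  plugins.forEach((p) => visit(p.id));
  return order;
}
```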

Activation source:

  • ENABLED_PLUGINS (CSV)
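Parsing the CSV activation source might look like the following; the trimming and empty-entry tolerance are assumptions about the parser's leniency.

```typescript
// Parse the ENABLED_PLUGINS CSV environment variable into plugin ids.
function parseEnabledPlugins(csv: string | undefined): string[] {
  return (csv ?? "")
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}
```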

Runtime inventory:

  • GET /plugins returns plugin status (enabled / disabled / incompatible / error)

Dashboard behavior:

  • Plugin route modules are discovered with import.meta.glob
  • New plugin package modules require dashboard rebuild
  • Enabled/disabled filtering is applied during startup/runtime bootstrapping

DB behavior:

  • Core migrations run first
  • Plugin migrations run in dependency order
  • Migration state is persisted to guarantee idempotent re-runs
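The idempotent-replay guarantee above can be sketched as follows: persisted state (represented here by an in-memory `Set` standing in for the real migration table) lets re-runs skip already-applied migrations, with core migrations always ahead of plugin migrations.

```typescript
// Run core migrations, then plugin migrations in dependency order,
// skipping anything already recorded as applied.
function runMigrations(
  applied: Set<string>,
  core: string[],
  pluginsInDepOrder: string[],
  run: (name: string) => void,
): void {
  for (const name of [...core, ...pluginsInDepOrder]) {
    if (applied.has(name)) continue; // idempotent re-run: skip applied work
    run(name);
    applied.add(name);
  }
}
```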