
Mecha

An event-driven agentic workflow engine.

Mecha turns GitHub events into LLM tasks — one Go binary, YAML config, policy-controlled write-back.

GitHub webhook → match worker → dispatch prompt → policy filter → write back

What it does

  • Multi-LLM — Claude, Codex, Gemini in Docker containers. Ollama and OpenAI-compatible via adapters. Switch models in one YAML line.
  • Event-driven — GitHub and GitLab webhooks trigger workers automatically. Generic webhooks for anything else.
  • Policy-controlled — decide what each worker can write back: comments, labels, status checks, commit suggestions. Block what you don't want.
  • Self-hosted — single binary, runs on your machine or server. No cloud dependency. Your code stays on your infra.

Quick look

Define a worker:

```yaml
name: pr-reviewer
docker:
  image: mecha-worker-claude:latest
  token: claude.default
  env:
    CLAUDE_MODEL: claude-sonnet-4-6
    CLAUDE_EFFORT: high
events:
  - source: github
    on: [pull_request.opened, pull_request.synchronize]
    prompt: "Review this PR for security issues.\n\n{{.diff}}"
policy:
  comment: { allow: true, max_length: 2000 }
  labels: { allow: true }
  status: { allow: true }
  commit: { allow: false }
```

Start it:

```sh
mecha worker add workers/pr-reviewer.yml
mecha worker start pr-reviewer
mecha serve --addr 0.0.0.0:8080
```

Every PR now gets an automated security review.
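When you point a GitHub webhook at a secret-protected endpoint, GitHub signs each delivery with HMAC-SHA256 in the `X-Hub-Signature-256` header. As a sketch, you can compute that signature yourself to hand-deliver a test event (the payload, secret, and endpoint path below are placeholders, and whether Mecha checks this header is an assumption, not documented behavior):

```shell
# Compute the signature GitHub would attach to this delivery.
# Payload and secret are placeholders for a local test.
payload='{"action":"opened"}'
secret='replace-me'
sig="sha256=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | sed 's/^.* //')"
echo "$sig"

# Deliver it by hand (the endpoint path is an assumption):
# curl -s -X POST http://localhost:8080/webhooks/github \
#   -H "X-GitHub-Event: pull_request" \
#   -H "X-Hub-Signature-256: $sig" \
#   -d "$payload"
```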

Install

```sh
# macOS (Apple Silicon)
curl -L https://github.com/xiaolai/mecha.im/releases/latest/download/mecha-darwin-arm64.tar.gz | tar xz
sudo mv mecha /usr/local/bin/

# Linux (x86_64)
curl -L https://github.com/xiaolai/mecha.im/releases/latest/download/mecha-linux-amd64.tar.gz | tar xz
sudo mv mecha /usr/local/bin/
```

Or build from source (requires Go 1.26+): git clone + make build.

Docker 28+ is required only for container workers. Adapter workers (Ollama, vLLM) need no Docker.

Three worker types

| Type      | What                                                        | Docker needed? |
|-----------|-------------------------------------------------------------|----------------|
| Managed   | LLM CLI in a Docker container                               | Yes            |
| Adapter   | In-process bridge to Ollama, vLLM, any OpenAI-compatible API | No             |
| Unmanaged | Your existing HTTP endpoint                                  | No             |
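An adapter worker config might look like the following. This is a hedged sketch: the `adapter` block's field names are assumptions modeled on the managed example above, not the documented schema.

```yaml
# Hypothetical adapter worker; field names under `adapter:` are illustrative.
name: local-reviewer
adapter:
  kind: openai                        # any OpenAI-compatible endpoint
  base_url: http://localhost:11434/v1 # e.g. a local Ollama server
  model: llama3.1
events:
  - source: github
    on: [pull_request.opened]
    prompt: "Summarize this PR.\n\n{{.diff}}"
policy:
  comment: { allow: true }
  commit: { allow: false }
```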

Architecture

Four nouns. One pipeline.

Event.arrive → Event.match → Task.create → Task.dispatch → Policy.filter → Task.complete
  • Event — something happened (webhook)
  • Worker — takes a prompt, returns a result
  • Task — an event matched to a worker
  • Policy — what the result is allowed to contain

Design principle: dumb pipeline, smart step, policy gate. The pipeline is deterministic. The LLM is the only smart part. Policy is the only governance checkpoint.

Docs

mecha.im — full documentation including installation, worker config, secrets, events, policy, CLI reference, and API.

Status

Under active development. Core pipeline is implemented and working:

  • Worker lifecycle (Docker + adapters + unmanaged)
  • Task dispatch with health checks and recovery
  • GitHub + GitLab + generic webhook sources
  • Event hydration (PR diffs, file lists)
  • Policy-filtered write-back (comments, labels, status, commit suggestions)
  • Disposable (one-shot) containers
  • SQLite persistence with WAL mode

License

ISC
