# hl

Goal: keep deployments on a single host dead-simple, explicit, and reliable — without adopting a full orchestrator.
## Installation

Use the installation script for an interactive setup:

```bash
curl -fsSL https://raw.githubusercontent.com/felipecsl/hl/master/install.sh | bash
```

Or download and run it manually:

```bash
curl -fsSL https://raw.githubusercontent.com/felipecsl/hl/master/install.sh -o install.sh
chmod +x install.sh
./install.sh
```

The script will:
- Prompt for your remote SSH username and hostname
- Test the SSH connection (requires SSH key authentication)
- Download the latest `hl` release binary from GitHub and `scp` it to your server
- Create a wrapper script that invokes it via `ssh`
- Add the wrapper to your `$PATH`
Note: This assumes SSH public key authentication is already configured. If not, set it up first with `ssh-copy-id user@hostname`.
To test the latest unreleased code from your local HEAD commit, run:

```bash
./install-head.sh
```

This script archives your current local commit, builds `hl` locally from that commit, uploads the binary to the remote server, and creates the same local SSH wrapper.
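The "archive the exact commit" step can be sketched with `git archive`, which exports a commit's tree with no working-tree drift or `.git` state. This is only an illustration of the idea; `export_commit` is a hypothetical helper, not part of hl's actual code:

```shell
# Sketch of exporting an exact commit to a clean build context.
# export_commit is an illustrative helper, not part of hl itself.
export_commit() {
  local sha="$1" dest="$2"
  mkdir -p "$dest"
  # git archive emits only the committed tree: no untracked files,
  # no local modifications, no .git directory.
  git archive --format=tar "$sha" | tar -x -C "$dest"
}
```

Building from such an export guarantees the artifact corresponds to the pushed commit, not whatever happens to be in the working tree.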
## Manual installation

First, build and copy the binary to your remote server:

```bash
cargo build --release
scp target/release/hl <host>:~/.local/bin
```

Then create a wrapper script for invoking `hl` on the remote host via `ssh`:
```bash
cat > ~/.local/bin/hl <<'BASH'
#!/usr/bin/env bash
set -euo pipefail

REMOTE_USER="${REMOTE_USER:-homelab}"
REMOTE_HOST="${REMOTE_HOST:-homelab.local}" # or your server FQDN

# Infer the app name from an hl git remote, if exactly one exists.
infer_hl_app() {
  local -a apps
  mapfile -t apps < <(
    git remote -v 2>/dev/null \
      | awk '{print $2}' \
      | sed -nE 's#.*[:/]hl/git/([^/]+)\.git$#\1#p' \
      | sort -u
  )
  case "${#apps[@]}" in
    0) return 0 ;;
    1) printf '%s\n' "${apps[0]}" ;;
    *)
      echo "Error: multiple hl remotes found (${apps[*]}). Set HL_APP explicitly." >&2
      exit 1
      ;;
  esac
}

APP_NAME="${HL_APP:-}"
if [ -z "$APP_NAME" ]; then
  APP_NAME="$(infer_hl_app || true)"
fi

if [ -n "$APP_NAME" ]; then
  ssh "${REMOTE_USER}@${REMOTE_HOST}" HL_APP="$APP_NAME" ~/.local/bin/hl "$@"
else
  ssh "${REMOTE_USER}@${REMOTE_HOST}" ~/.local/bin/hl "$@"
fi
BASH
chmod +x ~/.local/bin/hl
```

## What this solves
- You have a single VPS/home server and multiple apps.
- You want Heroku-style "`git push` → build → deploy", but:
  - no complex control planes,
  - no multi-host orchestration,
  - no hidden daemons updating containers behind your back.
## Design goals
- Deterministic: deploy exactly the pushed commit, no working tree drift.
- Explicit: no Watchtower; restarts are performed by `hl`.
- Boring primitives: Git, Docker (Buildx), Traefik, Docker Compose, systemd.
- Ergonomics: one per-app folder on the server (`~/hl/apps/<app>`), one systemd unit, minimal YAML.
- Small blast radius: per-app everything (compose/env/config) — easy to reason about and recover.
## Core flow
1. Push: You push to a bare repo on the server (e.g., `~/hl/git/<app>.git`).
2. Hook → `hl deploy`: The repo's `post-receive` hook invokes `hl deploy` with `--sha` and `--branch`.
3. Export commit: `hl` exports that exact commit (via `git archive`) to an ephemeral build context.
4. Build & push image: Docker Buildx builds and pushes tags: `:<shortsha>`, `:<branch>-<shortsha>`, and `:latest`.
5. Migrations (optional): `hl` runs DB migrations in a one-off container using the new image tag.
6. Retag and restart: `hl` retags `:latest` to the new sha and restarts the app using systemd (which runs `docker compose` under the hood).
7. Health-gate: `hl` waits until the app is healthy. The deploy completes only once it is healthy.
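The hook in step 2 can be sketched as follows. This is only an illustration of the shape, not the hook hl actually generates; Git feeds `post-receive` one `<oldrev> <newrev> <refname>` line per updated ref on stdin:

```shell
# Illustrative post-receive hook logic (hl's generated hook may differ).
# Git passes "<oldrev> <newrev> <refname>" per updated ref on stdin.
handle_post_receive() {
  local oldrev newrev refname branch
  while read -r oldrev newrev refname; do
    branch="${refname#refs/heads/}" # refs/heads/master -> master
    # In the real hook this would invoke:
    #   HL_APP=<app> hl deploy --sha "$newrev" --branch "$branch"
    echo "hl deploy --sha $newrev --branch $branch"
  done
}
```

Because the hook receives the exact pushed sha, every downstream step (export, build, migrate, retag) is pinned to that commit.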
## Runtime layout (per app)

```
~/hl/apps/<app>/
  compose.yml              # app service + Traefik labels
  compose.<accessory>.yml  # e.g., compose.postgres.yml
  .env                     # runtime secrets (0600)
  hl.yml                   # server-owned app config
  pgdata/ ...              # volumes (if using Postgres)

systemd: app-<app>.service # enabled at boot
```
## Networking & routing
- Traefik runs separately and exposes the `web`/`websecure` entrypoints.
- Apps join a shared Docker network (e.g., `traefik_proxy`) and advertise themselves via labels.
- Certificates are issued by ACME (e.g., Route53 DNS challenge).
## Comparison

| Tool | What it is | Where hl differs |
|---|---|---|
| Watchtower | Image watcher that auto-updates containers | hl does not auto-update. Deploys are explicit and health-gated. |
| Kamal | SSH deploy orchestrator (blue/green, fan-out, hooks) | hl intentionally avoids multi-host/fleet features and blue/green. It’s a single-host release tool with simpler ergonomics. |
| Docker Swarm/K8s | Schedulers with service discovery and reconciliation loops | hl doesn’t introduce a scheduler. It leans on systemd + compose for simple, predictable runtime. |
Bottom line: hl is a small, single-host release manager that turns a Git push into a reproducible build and a clean, health-checked restart — with Traefik for ingress. No magic daemons, no control plane.
## Pros
- Simplicity: Git hooks + Docker Buildx + Compose + systemd.
- Deterministic builds: every deploy uses `git archive` of the exact commit.
- Fast rollback: `hl rollback <sha>` retags and health-checks.
- Clear logs: `journalctl -u app-<app>.service` for runtime; deploy logs in hook/CLI output.
- Separation of concerns: build (ephemeral) vs. runtime (per-app directory).
- Server-owned config: domains, networks, health, secrets stay off the image.
## Cons / Trade-offs
- No blue/green: restarts are in-place (health-gated, but not traffic-switched).
- Single host: no parallel fan-out or placement strategies.
- Manual accessories: DBs/Redis are compose fragments, not managed clusters.
- Layer caching: ephemeral build contexts reduce cache reuse (you can configure a persistent workspace if needed).
## Configuration

`hl` reads a server-owned file at `~/hl/apps/<app>/hl.yml`:
```yaml
app: recipes
image: registry.example.com/recipes
domain: recipes.example.com
servicePort: 8080
resolver: myresolver   # Traefik ACME resolver name
network: traefik_proxy # Docker network shared with Traefik
platforms: linux/amd64 # Buildx platforms
health:
  url: http://recipes:8080/healthz
  interval: 2s
  timeout: 45s
migrations:
  command: ["bin/rails", "db:migrate"]
  env:
    RAILS_ENV: "production"
  secrets:
    - RAILS_MASTER_KEY
    - SECRET_KEY_BASE
```

For health checks, `hl` runs a short-lived `curl` container on the app network to hit `http://<service>:<port><path>`. This works even when nothing is published on host ports.
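The health-gate loop can be sketched like this. The probe command is parameterized here so the loop is runnable without Docker; in hl itself the probe would be the short-lived `curl` container described above. All names and the output format are illustrative, not hl's actual implementation:

```shell
# Sketch of a health-gate loop (illustrative; hl's internals differ).
# probe: a command that exits 0 when the app is healthy. In hl's case,
# roughly: docker run --rm --network traefik_proxy curlimages/curl \
#            -fsS http://recipes:8080/healthz
health_gate() {
  local probe="$1" timeout="${2:-45}" interval="${3:-2}" elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if $probe >/dev/null 2>&1; then
      echo "healthy after ${elapsed}s"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "unhealthy after ${timeout}s" >&2
  return 1
}
```

The deploy succeeds only if the loop returns 0 before the timeout; otherwise the deploy is reported as failed.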
An optional container healthcheck in `compose.yml` keeps startup ordering crisp:
```yaml
services:
  recipes:
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "wget -qO- http://localhost:8080/healthz >/dev/null 2>&1 || exit 1",
        ]
      interval: 5s
      timeout: 3s
      retries: 10
```

## Accessories

`hl accessory add postgres` will:
1. Write `compose.postgres.yml` with a healthy `pg` service on the same network.
2. Add `depends_on: { pg: { condition: service_healthy } }` to your app (via the fragment).
3. Generate/update `.env` with:
   - `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB`
   - `DATABASE_URL=postgres://USER:PASSWORD@pg:5432/DB`
4. Patch the systemd unit to run with both files:
   `docker compose -f compose.yml -f compose.postgres.yml up -d`
5. Restart the unit.
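The `DATABASE_URL` wiring in step 3 is plain string assembly; a sketch, where the helper name is illustrative (not hl code) and `pg` is the accessory's service/host name on the shared network:

```shell
# Illustrative helper showing how DATABASE_URL is derived from the
# generated credentials; "pg" resolves via Docker's network DNS.
make_database_url() {
  local user="$1" password="$2" db="$3"
  printf 'postgres://%s:%s@pg:5432/%s\n' "$user" "$password" "$db"
}
```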
The same pattern can add Redis (`compose.redis.yml`, `REDIS_URL=redis://redis:6379/0`) and others.
## Usage

```bash
# Create runtime home, compose, hl.yml, systemd
hl init \
  --app recipes \
  --image registry.example.com/recipes \
  --domain recipes.example.com \
  --port 8080
```

This creates:

- `~/hl/apps/recipes/{compose.yml,.env,hl.yml}`
- `app-recipes.service` (enabled)
Add the optional `--build` flag for build-time env vars (e.g., Docker build secrets):
```bash
hl env set [--build] RAILS_MASTER_KEY=... SECRET_KEY_BASE=...
hl env ls  # prints keys with values redacted
```

```bash
hl accessory add postgres --version 16
# Writes compose.postgres.yml, updates systemd, restarts.
```

```bash
git remote add production ssh://<user>@<host>/home/<user>/hl/git/recipes.git
git push production master
```

The pipeline:
- Exports the pushed commit
- Builds & pushes the image (`:<sha>`, `:<branch>-<sha>`, `:latest`)
- Runs migrations on `:<sha>`
- Retags `:latest` → `:<sha>`
- Restarts `app-recipes.service`
- Waits for health
```bash
hl rollback eef6fc6
```

Retags `:latest` to the specified sha, restarts, and health-checks.
## CLI reference

Command names/flags may differ in your Rust implementation, but this is the intended surface:

> `--app` is only required for `hl init`. Other app-scoped commands expect `HL_APP` (set explicitly or by the local wrapper script).
- `hl init --app <name> --image <ref> --domain <host> --port <num> [--network traefik_proxy] [--resolver myresolver]`
  Create `compose.yml`, `.env`, `hl.yml`, and the systemd unit.
- `hl deploy --sha <sha> [--branch <name>]`
  Export commit → build & push → migrate → retag → restart (systemd) → health-gate.
- `hl rollback <sha>`
  Retag `:latest` → `<sha>`, restart, health-gate.
- `hl env set [--build] KEY=VALUE [KEY=VALUE ...]`
  Update the app's `.env`/`.env.build` (0600). `hl env ls [--build]` lists keys with values redacted.
- `hl accessory add postgres [--version <v>] [--user <u>] [--database <name>] [--password <p>]`
  Add Postgres as an accessory and wire `DATABASE_URL`.
- `hl accessory add redis [--version <v>]`
  Add Redis as an accessory and wire `REDIS_URL`.
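The redaction `hl env ls` performs amounts to printing keys while masking values. A sketch, under the assumption that `.env` holds simple `KEY=VALUE` lines (the exact output format of hl may differ):

```shell
# Illustrative redaction of KEY=VALUE lines (hl's exact output may differ).
redact_env() {
  sed -E 's/^([A-Za-z_][A-Za-z0-9_]*)=.*/\1=[redacted]/'
}
```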
## Example `compose.yml`

```yaml
services:
  recipes:
    image: registry.example.com/recipes:latest
    restart: unless-stopped
    env_file: [.env]
    networks: [traefik_proxy]
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.recipes.rule=Host(`recipes.example.com`)"
      - "traefik.http.routers.recipes.entrypoints=websecure"
      - "traefik.http.routers.recipes.tls.certresolver=myresolver"
      - "traefik.http.services.recipes.loadbalancer.server.port=${SERVICE_PORT}"

networks:
  traefik_proxy:
    external: true
    name: traefik_proxy
```

## Operational notes

- Env vars: keep them in `.env` with mode `0600`. Do not bake secrets into images.
- Registry auth: the server must be logged in to your registry prior to deploys.
- Traefik network: ensure one canonical network name (e.g., `traefik_proxy`) shared by Traefik and apps.
- Backups: if using the Postgres accessory, back up `pgdata/` and consider a nightly `pg_dump`.
- Layer cache: if builds become slow, configure a persistent build workspace for better cache reuse.
## Roadmap

- Accessories: Redis helper (compose fragment + `REDIS_URL`), S3-compatible storage docs.
- Hooks: `preDeploy`/`postDeploy` (asset precompile, cache warmers).
- Diagnostics: `hl status`/`hl logs` wrapping `systemctl`/`journalctl` and `docker compose ps`/`logs`.
- Rollback UX: `hl releases <app>` to list recent SHAs/tags with timestamps.
- Build cache toggle: support a persistent build workspace path in `homelab.yml`.
- Backup tasks: `hl pg backup`/`restore` helpers.
- CI bridge: optional GitHub Actions job that invokes the server over SSH and runs `hl deploy`.
## Architecture

```
Laptop                                    Server
------                                    -------------------------------------
git push ───────────────────────────────▶ bare repo: <app>.git
                                          post-receive → HL_APP=<app> hl deploy --sha --branch
                                            ├─ export commit (git archive) → ephemeral dir
                                            ├─ docker buildx build --push (:<sha>, :<branch>-<sha>, :latest)
                                            ├─ run migrations (docker run ... :<sha>)
                                            ├─ retag :latest → :<sha> + push
                                            ├─ systemctl restart app-<app>.service
                                            └─ health-gate (docker-mode or http-mode)

Runtime
-------
Traefik ◀─────────────── docker network (traefik_proxy) ────────────────▶ App container
                                   └──────(optional)▶ Postgres accessory
```
## FAQ

Q: Where does the build context come from?
A: From the bare repo you pushed to. `hl` uses `git archive` for the exact commit — no persistent working tree.

Q: Why not Watchtower for restarts?
A: To keep rollouts explicit and health-gated in one place (the deploy command).

Q: Can I pin a version?
A: Yes. Use Docker image tags directly or `hl rollback <sha>` to retag `:latest` to a known-good image.

Q: Can I use a public URL for health checks?
A: Yes — set `health.mode: http` with a URL. Docker-mode is preferred for independence from DNS/ACME.
## License

- MIT
- Contributions welcome — especially adapters for accessories (Redis), backup helpers, and CI bridges.