Deployment procedures and strategies for itsup infrastructure.
Deployment Philosophy: Zero-downtime deployments with smart change detection.
Key Features:
- Configuration-driven (YAML)
- Smart rollout (only deploys when config changed)
- Parallel deployment (multiple projects at once)
- Health check integration
- Automatic artifact generation
```text
User change → itsup apply → config hash → Changed?
                                ├─ Yes → generate artifacts → docker compose up
                                └─ No  → skip (already deployed)
```
How it works:
- Calculate the MD5 hash of the project configuration (docker-compose.yml + ingress.yml)
- Compare it with the hash stored in the running container's labels
- If different: deploy
- If same: skip
Benefits:
- Avoids unnecessary container restarts
- Faster operations (no-op for unchanged projects)
- Clear feedback (shows which projects deployed)
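The change detection described above can be sketched in a few lines of Python. This is a minimal illustration only: the function names (`config_hash`, `needs_deploy`) are hypothetical, and the real itsup implementation may hash different inputs or read the stored hash from container labels differently.

```python
import hashlib
from pathlib import Path


def config_hash(project_dir):
    """MD5 over the deployable config files of a project (sketch)."""
    h = hashlib.md5()
    for name in ("docker-compose.yml", "ingress.yml"):
        path = Path(project_dir) / name
        if path.exists():
            h.update(path.read_bytes())
    return h.hexdigest()


def needs_deploy(project_dir, running_hash):
    """Deploy only when the stored hash (e.g. from a container label) differs."""
    return running_hash != config_hash(project_dir)
```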
Strategy: Docker Compose up with scale-based blue-green deployment.
For multi-container services:
- Start new container(s) with updated config
- Wait for health checks to pass
- Stop old container(s)
- Remove old container(s)
For single-container services:
- Stop old container
- Start new container with updated config
Zero-downtime: Traefik automatically routes to healthy containers.
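The multi-container rollout above amounts to a health-gated handover: start the new containers, wait for health, then retire the old ones. A simplified sketch with stand-in callables (the real Docker Compose mechanics differ):

```python
import time


def wait_healthy(check, timeout=60.0, interval=2.0):
    """Poll a health probe (stand-in for a docker health status check)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False


def blue_green_rollout(start_new, check_new, stop_old, remove_old, timeout=60.0):
    """Start new containers, gate on health, then stop and remove the old ones.

    If health never passes, the old containers are left running and keep
    serving traffic."""
    start_new()
    if not wait_healthy(check_new, timeout=timeout, interval=0.1):
        return False  # rollout aborted; old containers untouched
    stop_old()
    remove_old()
    return True
```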
`itsup run`

What it does:
- Start the DNS stack (creates the `proxynet` network)
- Start the Proxy stack (Traefik + dockerproxy)
- Start the API (host process)
- Start the Monitor (host process)

Use case: Initial setup, or a full system restart after reboot.
`itsup apply`

What it does:
- Load all projects from the projects/ directory
- Calculate the config hash for each
- For changed projects (in parallel):
  - Regenerate upstream/{project}/docker-compose.yml
  - Run `docker compose up -d`
- Report results (deployed, skipped, failed)

Use case: Deploy all changes after configuration updates.
Output example:

```text
✓ project-a deployed (config changed)
○ project-b skipped (no changes)
✗ project-c failed (docker error)
```
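The parallel apply loop with its three outcomes might look roughly like this. Purely illustrative: `deploy_fn` and `changed_fn` are stand-ins for itsup internals, not its actual API.

```python
from concurrent.futures import ThreadPoolExecutor


def apply_all(projects, deploy_fn, changed_fn):
    """Deploy changed projects in parallel; collect deployed/skipped/failed."""
    results = {}
    to_deploy = []
    for name in projects:
        if changed_fn(name):
            to_deploy.append(name)
        else:
            results[name] = "skipped"  # config hash unchanged: no-op
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(deploy_fn, n): n for n in to_deploy}
        for fut, name in futures.items():
            try:
                fut.result()
                results[name] = "deployed"
            except Exception:
                results[name] = "failed"  # e.g. a docker error
    return results
```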
`itsup apply <project>`

What it does:
- Load the project from projects/{project}/
- Calculate the config hash
- If changed:
  - Regenerate upstream/{project}/docker-compose.yml
  - Run `docker compose up -d`
- Report the result

Use case: Deploy changes to a specific project.
```bash
itsup svc <project> up [service]       # Start service(s)
itsup svc <project> down [service]     # Stop service(s)
itsup svc <project> restart [service]  # Restart service(s)
itsup svc <project> pull [service]     # Pull image updates
```

Use case: Manual service management without regenerating config.
1. Edit the project's docker-compose.yml:

```bash
vim projects/{project}/docker-compose.yml
```

2. Deploy:

```bash
itsup apply {project}
```

What happens:
- Config hash changes
- Artifacts are regenerated
- Containers are restarted with the new config
1. Edit ingress.yml:

```bash
vim projects/{project}/ingress.yml
```

Example - add a new route:

```yaml
enabled: true
ingress:
  - service: web
    domain: app.example.com
    port: 3000
    router: http
  - service: api # NEW
    domain: api.example.com
    port: 8080
    router: http
```

2. Deploy:

```bash
itsup apply {project}
```

What happens:
- Traefik labels are regenerated
- Containers are recreated with the new labels
- Traefik picks up the new routes automatically
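Conceptually, label generation maps each ingress entry to a Traefik router rule and a service port. A rough sketch of that mapping (the exact label set itsup emits is an assumption, not verified against its source):

```python
def traefik_labels(entries):
    """Derive Traefik docker labels from ingress entries (illustrative)."""
    labels = []
    for e in entries:
        svc = e["service"]
        labels += [
            # Route requests for the domain to this service...
            f"traefik.http.routers.{svc}.rule=Host(`{e['domain']}`)",
            # ...and forward them to the container port.
            f"traefik.http.services.{svc}.loadbalancer.server.port={e['port']}",
        ]
    return labels
```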
1. Edit itsup.yml:

```bash
vim projects/itsup.yml
```

2. Regenerate proxy config:

```bash
bin/write_artifacts.py # Regenerates all configs
```

3. Restart the proxy:

```bash
itsup proxy restart
```

1. Edit traefik.yml overrides:

```bash
vim projects/traefik.yml
```

Example - add custom middleware:

```yaml
http:
  middlewares:
    rate-limit:
      rateLimit:
        average: 100
        burst: 50
```

2. Regenerate and restart:

```bash
bin/write_artifacts.py
itsup proxy restart traefik
```

Pre-deployment checklist:
- Backup current configuration (`git commit && git push`)
- Validate configuration (`itsup validate` or `itsup validate {project}`)
- Review changes (`git diff`)
- Check secrets are up to date (`ls -l secrets/*.enc.txt`)
- Verify sufficient disk space (`df -h`)
- Check no containers are in an error state (`docker ps -a --filter "status=exited"`)
Correct order:
1. DNS stack (`itsup dns up`)
2. Proxy stack (`itsup proxy up`)
3. API (`bin/start-api.sh` or systemd)
4. Monitor (`itsup monitor start` or systemd)
5. Projects (`itsup apply`)

Why this order:
- DNS creates the `proxynet` network (required by the proxy)
- The proxy creates Traefik (required by projects for routing)
- API and Monitor can start anytime after the proxy
Easy way: `itsup run` does all of the above automatically.
Note: itsup run starts the monitor in report-only mode (detection without blocking). For full protection with active blocking, use itsup monitor start after infrastructure is running.
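The ordering constraint can be expressed as a fail-fast runner: execute the steps in sequence and stop at the first failure, so later stacks never start without the network or proxy they depend on. A sketch with stand-in callables (not the real itsup commands):

```python
def start_in_order(steps):
    """Run (name, step) pairs in order; stop at the first failing step.

    Returns the names started so far and the name of the failed step
    (or None if everything came up)."""
    started = []
    for name, step in steps:
        if not step():
            return started, name  # e.g. proxy failed: don't start projects
        started.append(name)
    return started, None
```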
Check infrastructure:

```bash
docker network ls | grep proxynet   # Network exists
itsup proxy logs traefik | tail -20 # Traefik healthy
curl http://localhost:80/health     # Traefik responding
```

Check project:

```bash
itsup svc {project} ps       # Containers running
itsup svc {project} logs -f  # Check logs for errors
curl https://{domain}/health # Service responding
```

Check Traefik routing:

```bash
# Traefik should log new routes discovered
itsup proxy logs traefik | grep "Adding route"
```

Quick rollback (git-based):

```bash
# Find last known good commit
git log --oneline | head -5
# Roll back to a specific commit
git checkout <commit-hash> -- projects/{project}/
# Deploy the rollback
itsup apply {project}
```

Emergency rollback (stop service):

```bash
# Stop the problematic service immediately
itsup svc {project} down
# Or a specific service
itsup svc {project} down {service}
# Fix and redeploy when ready
```

Setup:
```bash
# Create project directory
mkdir -p projects/my-app

# Create docker-compose.yml
cat > projects/my-app/docker-compose.yml <<EOF
services:
  web:
    image: nginx:latest
    volumes:
      - ./html:/usr/share/nginx/html:ro
    networks:
      - proxynet

networks:
  proxynet:
    external: true
EOF

# Create ingress.yml
cat > projects/my-app/ingress.yml <<EOF
enabled: true
ingress:
  - service: web
    domain: my-app.example.com
    port: 80
    router: http
EOF
```

Deploy:

```bash
itsup apply my-app
```

Verify:

```bash
itsup svc my-app ps
curl https://my-app.example.com
```

Update docker-compose.yml:
```bash
vim projects/{project}/docker-compose.yml
# Change:
#   image: app:v1.0
# To:
#   image: app:v2.0
```

Deploy:

```bash
itsup apply {project}
```

What happens:
- Config hash changes (image version changed)
- `docker compose pull` pulls the new image
- `docker compose up -d` recreates the container with the new image
- The old container is removed after the new one is healthy
Update docker-compose.yml:

```yaml
services:
  web:
    image: nginx:latest
    # ... existing config
  api: # NEW SERVICE
    image: node:20-alpine
    command: npm start
    environment:
      - PORT=3000
    networks:
      - proxynet
```

Update ingress.yml:

```yaml
ingress:
  - service: web
    domain: app.example.com
    port: 80
    router: http
  - service: api # NEW ROUTE
    domain: api.example.com
    port: 3000
    router: http
```

Deploy:

```bash
itsup apply {project}
```

Result: The new service starts, and its route is added to Traefik.
If using a secrets file:

```bash
# Edit the secrets file
vim secrets/{project}.txt
# Add/change the variable
NEW_VAR=value
# Re-encrypt
itsup encrypt {project}
# Commit
git add secrets/{project}.enc.txt
git commit -m "Update {project} secrets"
```

If in docker-compose.yml:

```yaml
services:
  app:
    environment:
      - NEW_VAR=${NEW_VAR} # Placeholder for secrets file
```

Deploy:

```bash
itsup apply {project}
```

Important: Secrets are loaded from secrets/itsup.txt + secrets/{project}.txt at deployment time.
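The documented load order (shared secrets first, then per-project values) can be sketched as a simple merge, with per-project values winning on conflict. Helper names are hypothetical; this is not the actual itsup code.

```python
def parse_env(text):
    """Parse simple KEY=value lines, ignoring blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env


def merged_secrets(shared_text, project_text):
    """Merge shared and per-project secrets; project values override shared."""
    env = parse_env(shared_text)
    env.update(parse_env(project_text))
    return env
```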
Update docker-compose.yml:

```yaml
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 3 # Run 3 instances
    # ... rest of config
```

Deploy:

```bash
itsup apply {project}
```

Or scale manually:

```bash
docker compose -f upstream/{project}/docker-compose.yml up -d --scale web=3
```

Result: Traefik automatically load-balances across all replicas.
Symptom: itsup apply never completes.
Check:
```bash
# Look for unhealthy containers
docker ps -a --filter "health=unhealthy"
# Check logs for startup errors
itsup svc {project} logs
```

Fix:

```bash
# Stop the deployment (Ctrl+C)
# Force-stop containers
itsup svc {project} down
# Check and fix the configuration
vim projects/{project}/docker-compose.yml
# Retry
itsup apply {project}
```

Symptom: Changed config, but containers are not updating.
Reason: The config hash is not changing (you may have edited the wrong file).
Check:
```bash
# Verify you edited projects/, not upstream/
ls -l projects/{project}/docker-compose.yml
ls -l upstream/{project}/docker-compose.yml
# Compare the current vs running config hash
docker compose -f upstream/{project}/docker-compose.yml config --hash "*"
docker inspect {container} --format '{{index .Config.Labels "com.docker.compose.config-hash"}}'
```

Fix:

```bash
# Force deployment by stopping containers (removes hash labels)
itsup svc {project} down
# Deploy again
itsup apply {project}
```

Symptom: Container starts, but environment variables are empty or wrong.
Check:
```bash
# Verify the secrets file exists and is decrypted
cat secrets/{project}.txt
# Check encryption
itsup decrypt {project}
```

Fix:

```bash
# If .txt is empty but .enc.txt exists
itsup decrypt {project}
# Deploy again (ensures secrets are loaded)
itsup apply {project}
```

Symptom: Container can't reach other containers or the internet.
Check:
```bash
# Verify the proxynet network exists
docker network ls | grep proxynet
# Verify the container is connected
docker inspect {container} | grep -A 10 Networks
```

Fix:

```bash
# Recreate the network (if missing)
itsup dns down
itsup dns up
# Reconnect the container
itsup svc {project} restart
```

Symptom: Traefik is not routing to the service; health checks are failing.
Check:
```bash
# View container health status
docker ps --filter "name={project}"
# Check health check logs
docker inspect {container} | jq '.[0].State.Health'
```

Fix:

```bash
# Review the health check configuration
vim projects/{project}/docker-compose.yml
```

Adjust the health check:

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost/health"]
  interval: 10s
  timeout: 5s
  retries: 3
  start_period: 30s # Increase if the service is slow to start
```

Symptom: Deployment fails with "image not found" or "pull access denied".
Check:
```bash
# Try pulling manually
docker pull {image}:{tag}
# Check registry authentication
docker login {registry}
```

Fix:

```bash
# For private registries, add credentials
docker login ghcr.io -u username -p token
```

Or configure in docker-compose.yml:

```yaml
services:
  app:
    image: ghcr.io/user/app:latest
    environment:
      - DOCKER_AUTH_CONFIG={"auths": {...}}
```

Watch deployment progress:
```bash
# Via CLI
itsup apply {project} --verbose
# Via logs
tail -f logs/api.log | grep -i deploy
```

Check all containers:

```bash
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```

Check a specific project:

```bash
itsup svc {project} ps
```

Watch Traefik discover new routes:

```bash
itsup proxy logs traefik | grep -E "(Adding route|Register|Configuration)"
```

Check resource consumption:

```bash
docker stats --no-stream
# Shows CPU, memory, network for all containers
```

For critical services, use an explicit blue-green strategy:
```yaml
# docker-compose.yml
services:
  web-blue:
    image: app:v1
    labels:
      - traefik.http.services.web.loadbalancer.server.port=3000
    networks:
      - proxynet
  web-green:
    image: app:v2
    labels:
      - traefik.http.services.web.loadbalancer.server.port=3000
    networks:
      - proxynet
    # Initially scale=0
```

Deploy:

```bash
# Start green
docker compose up -d --scale web-green=1
# Test green
curl https://app.example.com # Should hit green
# Switch traffic (Traefik automatically load-balances)
docker compose up -d --scale web-blue=0
# Cleanup
docker compose rm -f web-blue
```

Use Traefik weights for gradual rollout:
```yaml
# ingress.yml with custom labels
labels:
  - traefik.http.services.web-v1.loadbalancer.server.port=3000
  - traefik.http.services.web-v1.loadbalancer.weight=90
  - traefik.http.services.web-v2.loadbalancer.server.port=3000
  - traefik.http.services.web-v2.loadbalancer.weight=10
  - traefik.http.routers.web.service=web-weighted
  - traefik.http.services.web-weighted.weighted.services[0].name=web-v1
  - traefik.http.services.web-weighted.weighted.services[0].weight=90
  - traefik.http.services.web-weighted.weighted.services[1].name=web-v2
  - traefik.http.services.web-weighted.weighted.services[1].weight=10
```

Gradually shift traffic: change the weights over time (90/10 → 50/50 → 10/90 → 0/100).
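The weight progression can be generated programmatically rather than edited by hand. A trivial sketch, assuming weights always sum to 100 and the handover follows the steps documented above:

```python
def canary_schedule(steps=(90, 50, 10, 0)):
    """Yield (old_weight, new_weight) pairs for a gradual traffic shift.

    Defaults reproduce the documented 90/10 -> 50/50 -> 10/90 -> 0/100 path."""
    return [(old, 100 - old) for old in steps]
```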
For services with databases, coordinate migrations with deployments:
```yaml
# docker-compose.yml
services:
  migrate:
    image: app:v2
    command: npm run migrate
    # Run once, then exit
    restart: "no"
    depends_on:
      - db
  app:
    image: app:v2
    depends_on:
      migrate:
        condition: service_completed_successfully
```

Deploy:

```bash
itsup apply {project}
# Migrations run first, then the app starts
```

GitHub Actions workflow:

```yaml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy via SSH
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_IP }}
          username: morriz
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /home/morriz/srv
            git pull
            source env.sh
            itsup apply
```

Validate before commit:
```bash
#!/bin/bash
# .git/hooks/pre-commit
source env.sh
itsup validate
if [ $? -ne 0 ]; then
  echo "Validation failed!"
  exit 1
fi
```

- Deployment History: Track all deployments with timestamps and outcomes
- Automatic Rollback: Auto-rollback on health check failures
- Deployment Locking: Prevent concurrent deployments
- Progressive Delivery: Automated canary analysis with metrics
- Change Impact Analysis: Predict which services will be affected by config changes
- Deployment Approval: Require manual approval for production deployments