A lightweight broadcast relay server for Source 2 game servers. It receives HLTV/GOTV broadcast fragments pushed by the game server, caches them, and serves them to any number of clients — decoupling producers from consumers.
Works with any Source 2 title (Counter-Strike 2, Deadlock, Dota 2, etc.).
Source 2 game servers can broadcast HLTV data to a single URL via `tv_broadcast_url`. This relay acts as that endpoint:
- Cache & distribute — one game server connection, unlimited consumers
- Extract live game events — pair with deadlock-live-events or build your own consumer
- Simple to self-host — single binary or Docker image, minimal configuration
Without a relay, every consumer would need direct access to the game server's GOTV port. The relay lets the game server push once, and clients pull on their own schedule.
```shell
docker run -d \
  -p 3000:3000 \
  -e HLTV_RELAY_AUTH_MODE=allow-all \
  ghcr.io/deadlock-api/hltv-relay:latest
```

Or with Docker Compose:

```yaml
services:
  hltv-relay:
    image: ghcr.io/deadlock-api/hltv-relay:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      HLTV_RELAY_AUTH_MODE: "key"
      HLTV_RELAY_AUTH_KEY: "your-secret-key"
      HLTV_RELAY_STORAGE: "redis"
      HLTV_RELAY_REDIS_URL: "redis://redis:6379"

  redis:
    image: redis:8-alpine
    restart: unless-stopped
```

To build and run from source:

```shell
cargo build --release
./target/release/hltv-relay --auth-mode allow-all
```

Configure your Source 2 game server to push HLTV broadcasts to the relay:
```
+tv_enable 1
+tv_broadcast 1
+tv_port 27020
+tv_delay 0
+tv_broadcast_url http://your-relay-host:3000/
```

If you're using key-based authentication:

```
+tv_broadcast_origin_auth your-secret-key
```
The game server will POST fragments to the relay automatically. The broadcast token (in the URL path) follows the format `s<STEAM_ID>t<TIMESTAMP>` and is generated by the game server.
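A token in this format can be split back into its two fields with a few lines of Rust. The `parse_token` helper below is illustrative only (it is not part of the relay's API), and the sample token is made up:

```rust
/// Parse a broadcast token of the form s<STEAM_ID>t<TIMESTAMP>
/// into its (steam_id, timestamp) parts. Illustrative helper only.
fn parse_token(token: &str) -> Option<(u64, u64)> {
    let rest = token.strip_prefix('s')?;               // drop the leading 's'
    let (steam_id, timestamp) = rest.split_once('t')?; // split at the 't' separator
    Some((steam_id.parse().ok()?, timestamp.parse().ok()?))
}
```

For example, `parse_token("s76561198000000000t1700000000")` yields `Some((76561198000000000, 1700000000))`.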
All options can be set via CLI flags or environment variables. Precedence: CLI flags > environment variables > defaults.
| Option | Env Var | CLI Flag | Default | Description |
|---|---|---|---|---|
| Port | `HLTV_RELAY_PORT` | `--port` | `3000` | Server listen port |
| Host | `HLTV_RELAY_HOST` | `--host` | `0.0.0.0` | Bind address |
| Storage | `HLTV_RELAY_STORAGE` | `--storage` | `memory` | `memory` or `redis` |
| Redis URL | `HLTV_RELAY_REDIS_URL` | `--redis-url` | `redis://127.0.0.1:6379` | Redis connection string |
| Auth mode | `HLTV_RELAY_AUTH_MODE` | `--auth-mode` | (none — denies all writes) | Comma-separated: `allow-all`, `key`, `network` |
| Auth key | `HLTV_RELAY_AUTH_KEY` | `--auth-key` | — | Secret for `key` auth mode |
| Allowed networks | `HLTV_RELAY_ALLOWED_NETWORKS` | `--allowed-networks` | — | CIDR ranges for `network` auth mode (e.g. `10.0.0.0/8,172.16.0.0/12`) |
| Fragment delay | `HLTV_RELAY_FRAGMENT_DELAY` | `--fragment-delay` | `8` | Fragments behind latest before serving to clients |
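As an illustration of that precedence (the binary path assumes the from-source build described above), the CLI flag wins and the relay listens on port 3000:

```shell
# --port (CLI flag) overrides HLTV_RELAY_PORT (env var)
HLTV_RELAY_PORT=4000 ./target/release/hltv-relay --port 3000 --auth-mode allow-all
```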
Authentication only applies to write (POST) endpoints. Read endpoints are always public.
| Mode | Description |
|---|---|
| `allow-all` | Accept all writes without verification |
| `key` | Require an `X-Origin-Auth` header matching the configured auth key. This is what `tv_broadcast_origin_auth` sends. |
| `network` | Accept writes from IPs within the configured CIDR ranges. Uses `X-Forwarded-For` if present (for reverse proxies), otherwise the peer IP. |
Modes can be combined (comma-separated). If any mode passes, the request is allowed. If no modes are configured, **all writes are denied**.
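That combination rule is a simple OR over the configured modes, and can be sketched in a few lines of Rust. `key_ok` and `network_ok` are hypothetical flags standing in for the actual header comparison and CIDR check (this is not the relay's real code):

```rust
/// OR over configured modes: a write is allowed if any mode passes.
/// `key_ok` / `network_ok` stand in for the real checks. Sketch only.
fn write_allowed(modes: &[&str], key_ok: bool, network_ok: bool) -> bool {
    modes.iter().any(|mode| match *mode {
        "allow-all" => true,
        "key" => key_ok,
        "network" => network_ok,
        _ => false,
    })
    // An empty `modes` slice makes `any` return false: all writes denied.
}
```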
The `memory` backend stores fragments in memory: simple, no dependencies, but data is lost on restart. Old fragments are automatically pruned (keeps ~1200 per match, roughly 1 hour of broadcast data).
Best for: development, single-instance deployments, low-latency setups.
The `redis` backend stores fragments in Redis with a 1-hour TTL. It survives relay restarts and can be shared across multiple relay instances.
Best for: production deployments, high availability, multi-instance setups.
| Method | Path | Response | Description |
|---|---|---|---|
| GET | `/{token}/sync?fragment={n}` | JSON | Sync metadata (current tick, fragment, map, protocol, etc.) |
| GET | `/{token}/{fragment}/start` | Binary | Start frame (contains TPS, protocol, map name) |
| GET | `/{token}/{fragment}/full` | Binary | Full game state snapshot |
| GET | `/{token}/{fragment}/delta` | Binary | Incremental delta update |
| GET | `/health` | Text | Health check (tests storage connectivity) |
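For example, a client that knows a broadcast token (the one below is made up) can check where the broadcast currently is and fetch a fragment:

```shell
# Sync metadata (JSON): current tick, fragment number, map, protocol
curl "http://your-relay-host:3000/s76561198000000000t1700000000/sync"

# Binary delta for fragment 42
curl -o delta.bin "http://your-relay-host:3000/s76561198000000000t1700000000/42/delta"
```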
| Method | Path | Query Params | Description |
|---|---|---|---|
| POST | `/{token}/{fragment}/start` | `tick`, `tps`, `map`, `protocol` | Store start frame |
| POST | `/{token}/{fragment}/full` | `tick` | Store full snapshot |
| POST | `/{token}/{fragment}/delta` | `endtick`, `final` | Store delta update |
Clients receive fragments with a configurable delay behind the latest ingested data (default: 8 fragments), ensuring smooth streaming despite network jitter.
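In other words, the newest fragment a client can fetch is the latest ingested fragment number minus the delay, and nothing is served until at least that many fragments exist. A minimal sketch of that rule (not the relay's actual implementation):

```rust
/// Newest fragment number the relay will serve, given the latest
/// ingested fragment and the configured delay. None until enough
/// fragments have been ingested. Sketch only.
fn newest_serveable(latest_ingested: u64, delay: u64) -> Option<u64> {
    latest_ingested.checked_sub(delay)
}
```

With the default delay of 8, a client is served fragment 12 once fragment 20 has been ingested.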
The read API is compatible with the Source 2 HLTV broadcast protocol. You can build your own consumer by polling the `/sync` endpoint and fetching fragments, or use an existing project:
- deadlock-live-events — extracts live game events from broadcast data
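A consumer is essentially a loop: ask `/sync` how far the broadcast has progressed, then fetch every fragment up to that point. The sketch below captures that loop in Rust; the `Relay` trait and `FakeRelay` are stand-ins for the HTTP read API (a real consumer would issue GET requests to `/{token}/sync` and `/{token}/{fragment}/delta` instead):

```rust
/// Stand-in for the HTTP read API; a real implementation would issue GETs.
trait Relay {
    /// Newest fragment the relay will currently serve (from /sync), if any.
    fn newest_serveable(&self) -> Option<u64>;
    /// Binary delta for a fragment, or None if it is not available.
    fn delta(&self, fragment: u64) -> Option<Vec<u8>>;
}

/// In-memory relay used only to exercise the loop below.
struct FakeRelay {
    fragments: Vec<Vec<u8>>,
    delay: u64,
}

impl Relay for FakeRelay {
    fn newest_serveable(&self) -> Option<u64> {
        // Latest ingested index is len - 1; hold back `delay` fragments.
        (self.fragments.len() as u64).checked_sub(self.delay + 1)
    }
    fn delta(&self, fragment: u64) -> Option<Vec<u8>> {
        self.fragments.get(fragment as usize).cloned()
    }
}

/// Fetch every serveable fragment starting at *next, advancing the cursor.
fn drain<R: Relay>(relay: &R, next: &mut u64) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    if let Some(latest) = relay.newest_serveable() {
        while *next <= latest {
            match relay.delta(*next) {
                Some(bytes) => {
                    out.push(bytes);
                    *next += 1;
                }
                None => break, // gap: retry on the next poll
            }
        }
    }
    out
}
```

A real consumer would call something like `drain` on an interval and feed the returned bytes to a Source 2 demo parser.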
- Reverse proxy: If running behind nginx/caddy/traefik, make sure `X-Forwarded-For` is set correctly if you use `network` auth mode.
- Redis: For production, use Redis storage to survive restarts. Any Redis-compatible server works (Redis, KeyDB, Valkey, DragonflyDB).
- Resources: Memory usage scales with the number of active matches. Each match retains ~1 hour of fragments. The relay itself is lightweight.
- Multi-instance: With Redis storage, you can run multiple relay instances behind a load balancer.
Requires Rust 1.94+.
```shell
cargo build --release
cargo test
```

Docker images are built for linux/amd64 and linux/arm64.