hpurls is a deliberately small URL shortener written in Go.
This project is not about novelty or scale. It focuses on making sound backend engineering decisions, the kind you’d expect in a production service, while keeping scope intentionally limited.
A URL shortener is deceptively simple.
It has:
- concurrent HTTP handling
- shared state and synchronization
- persistence + caching
- rate limiting
- observability
- graceful shutdown
All without hiding complexity behind heavy frameworks or infrastructure.
This project treats the URL shortener as a vehicle for good engineering practices, not as a product.
- HTTP API for shortening and resolving URLs
- SQLite-backed persistent storage
- Concurrency-safe in-memory cache
- Token-bucket rate limiting
- Structured logging using `log/slog`
- Prometheus-compatible metrics (`/metrics`)
- Health check endpoint (`/healthz`)
- Graceful shutdown on SIGINT/SIGTERM
- Configuration via environment variables
- Docker-ready
```
.
├── README.md
├── cache
│   └── cache.go
├── config
│   └── config.go
├── data.db
├── db.go
├── go.mod
├── go.sum
├── handlers.go
├── hpurls
├── logger
│   ├── logger.go
│   └── middleware.go
├── main.go
├── metrics
│   ├── metrics.go
│   └── middleware.go
└── ratelimit
    └── rate-limiter.go
```
The feature set is intentionally small.
The goal is correctness, clarity, and operational awareness, not feature breadth.
SQLite provides durability while keeping the service:
- self-contained
- easy to deploy
- free of external infrastructure dependencies
This mirrors common early-stage production setups.
Go HTTP handlers are concurrent by default.
Shared state (cache, metrics) is:
- owned by a `Server` struct
- accessed through well-defined methods
- synchronized internally
This avoids accidental data races and makes concurrency boundaries explicit.
An in-memory cache sits in front of the database to reduce read pressure.
The cache encapsulates its own mutex and exposes safe Get / Set methods.
Cache eviction is intentionally omitted to keep the focus on correctness rather than policy.
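A cache along those lines might look like the following (a sketch of the described pattern; the actual `cache` package API may differ):

```go
package main

import "sync"

// Cache is a concurrency-safe map from short codes to target URLs.
// The mutex is private: callers can only go through Get and Set.
type Cache struct {
	mu sync.RWMutex
	m  map[string]string
}

func NewCache() *Cache {
	return &Cache{m: make(map[string]string)}
}

// Get returns the cached URL for code, if present.
// RLock allows many concurrent readers.
func (c *Cache) Get(code string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	url, ok := c.m[code]
	return url, ok
}

// Set stores or overwrites the URL for code.
func (c *Cache) Set(code, url string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[code] = url
}
```

Encapsulating the mutex this way means callers cannot forget to lock, which is the point of the design.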
Token-bucket rate limiting is applied per endpoint to:
- prevent abuse
- protect shared resources
- demonstrate production-aware request handling
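The token-bucket idea can be sketched in a few lines (illustrative only; the project's `ratelimit` package may be structured differently):

```go
package main

import (
	"sync"
	"time"
)

// Bucket holds up to capacity tokens and refills at rate tokens/second.
// Each request consumes one token; an empty bucket rejects the request.
type Bucket struct {
	mu       sync.Mutex
	tokens   float64
	capacity float64
	rate     float64   // tokens added per second
	last     time.Time // time of the last refill
}

func NewBucket(capacity, rate float64) *Bucket {
	return &Bucket{tokens: capacity, capacity: capacity, rate: rate, last: time.Now()}
}

// Allow reports whether one request may proceed, consuming a token if so.
func (b *Bucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()

	// Lazily refill based on elapsed time, capped at capacity.
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now

	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}
```

Giving each endpoint its own `Bucket` lets write-heavy routes like `POST /shorten` be throttled more aggressively than cheap redirects.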
The service exposes Prometheus-compatible metrics via /metrics.
No Prometheus server is bundled or required — metrics collection is expected to be handled externally, as in real deployments.
On SIGINT/SIGTERM, the server:
- stops accepting new requests
- waits for in-flight requests
- shuts down cleanly
- closes the database connection
- `POST /shorten` — create a short URL
- `GET /{short}` — resolve and redirect
- `GET /metrics` — Prometheus-compatible metrics
- `GET /healthz` — health check
This is:
- a clean, idiomatic Go backend
- intentionally small
- production-minded
This is not:
- a globally scalable system
- a cache-eviction showcase
- an infrastructure-heavy service
Those omissions are deliberate.
```shell
git clone https://github.com/segfaultscribe/hpurls.git
cd hpurls
go mod tidy
go run .
```
This project prioritizes engineering judgment over feature breadth.
It’s meant to be small, understandable, and correct: a service that could realistically exist as part of a larger system.