A fully automated benchmarking suite comparing popular backend frameworks (Python, Go, Node, etc.). It measures performance on a consistent JSON-processing route and outputs a standalone HTML dashboard with visual results.
- Benchmarks dozens of frameworks across languages
- Measures Requests/sec, Latency, Transfer/sec
- Generates a zero-dependency HTML dashboard with charts + system info (results_dashboard.html)
- Python: Flask (with Gunicorn), FastAPI (with Uvicorn), Django (with Uvicorn + Gunicorn)
- JavaScript Runtimes:
- Node: Node (native), Express, Fastify, NestJS (Express), NestJS (Fastify), Koa
- Bun: Bun (native), Express, Hono, Elysia
- Deno: Deno (native), Express, Hono
- Go: Gin, Echo, Fiber, native net/http
- PHP: Laravel (with FrankenPHP)
- Java: Spring Boot
- C#: ASP.NET Core
- Ruby: Ruby on Rails
- Input JSON: { "numbers": [1, 2, 3, 4, 5] }
- Task: Compute the sum of squares of the numbers
- Output JSON: { "result": 55 }
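The task each route performs can be sketched framework-agnostically; this is a minimal illustration of the required logic, not code from any of the benchmarked servers:

```python
import json

def process(payload: str) -> str:
    """Sum-of-squares task that every benchmarked route implements."""
    numbers = json.loads(payload)["numbers"]
    return json.dumps({"result": sum(n * n for n in numbers)})

print(process('{"numbers": [1, 2, 3, 4, 5]}'))  # prints {"result": 55}
```

Each framework wraps this same computation in its own POST handler, so the benchmark compares HTTP and JSON overhead rather than application logic.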
Each framework is tested using:
wrk -t"$CORES" -c1000 -d60s -s post.lua http://localhost:3000/process

backend-benchmark/
├── <framework>/               # One folder per framework
│   ├── install.sh             # Installs the framework's dependencies (optional)
│   └── start.sh               # Starts that framework's server
├── results/                   # Results output folder
│   ├── raw/                   # Raw wrk output per framework
│   ├── results_summary.csv    # Parsed performance data
│   └── results_dashboard.html # Interactive HTML report
├── post.lua                   # wrk load script
├── benchmark_presetup.sh      # Installs all dependencies
├── benchmark_runner.sh        # Runs the benchmarks
└── parse_html.py              # Parses wrk results into charts
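As a rough illustration of the parsing step (the real logic lives in parse_html.py; this sketch only shows how the three headline metrics could be pulled out of a raw wrk report):

```python
import re

def parse_wrk(raw: str) -> dict:
    """Extract the headline metrics from raw wrk output.
    Illustrative sketch only; parse_html.py in the repo does the real work."""
    metrics = {}
    if m := re.search(r"Requests/sec:\s*([\d.]+)", raw):
        metrics["requests_per_sec"] = float(m.group(1))
    if m := re.search(r"Transfer/sec:\s*([\d.]+\w+)", raw):
        metrics["transfer_per_sec"] = m.group(1)
    if m := re.search(r"Latency\s+([\d.]+\w+)", raw):
        metrics["avg_latency"] = m.group(1)
    return metrics
```

Run once per file in results/raw/, the extracted rows can then be written to results_summary.csv and embedded in the dashboard.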
Install the required runtimes and tools for the frameworks you want to benchmark, and ensure each is available in your PATH.
git clone https://github.com/Drarox/Backend-Benchmark.git
cd backend-benchmark
./benchmark_presetup.sh
./benchmark_runner.sh

This will:
- Sequentially start each framework server
- Run a high-load test using wrk
- Kill the server
- Save results in results/
- Generate results_dashboard.html
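benchmark_runner.sh drives this loop in shell; a hypothetical Python equivalent (folder and script names follow the layout above, the thread count is illustrative) might look like:

```python
import subprocess
from pathlib import Path

def wrk_cmd(url: str, threads: int = 8, conns: int = 1000,
            duration: str = "60s") -> list:
    """Build the wrk invocation used for every framework."""
    return ["wrk", f"-t{threads}", f"-c{conns}", f"-d{duration}",
            "-s", "post.lua", url]

def run_benchmarks(frameworks: list) -> None:
    Path("results/raw").mkdir(parents=True, exist_ok=True)
    for fw in frameworks:
        server = subprocess.Popen(["bash", f"{fw}/start.sh"])  # start the server
        try:
            out = subprocess.run(wrk_cmd("http://localhost:3000/process"),
                                 capture_output=True, text=True).stdout
            Path(f"results/raw/{fw}.txt").write_text(out)      # save raw output
        finally:
            server.kill()                                      # stop the server
```

Running each server sequentially and killing it before the next start keeps every framework measured in isolation on the same port.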
Don't want to install Python, Node, Go, Deno, Bun, or wrk on your machine? No problem: everything runs cleanly inside a container, and the Docker image installs all the prerequisites.
docker build -t backend-benchmark .
docker run --rm -v "$PWD/results:/app/results" backend-benchmark

This will:
- Benchmark all frameworks
- Generate the HTML dashboard
- Mount results to your host in the results/ folder
This lets you run the full suite with zero host setup and clean everything up with one docker rmi.
Results from October 20, 2025
Changelog: Laravel, Spring Boot, ASP.NET Core, Ruby on Rails and Koa added
- macOS – M1 Pro (8-core), 16 GB RAM — View dashboard
- macOS (Docker Desktop) – M1 Pro (8-core), 8 GB RAM — View dashboard
Results from May 28, 2025
Changelog: First run of the benchmark
- macOS – M1 Pro (8-core), 16 GB RAM — View dashboard
- macOS (Docker Desktop) – M1 Pro (8-core), 8 GB RAM — View dashboard
- Ubuntu (Docker) – Xeon D-1531 (6-core), 32 GB RAM — View dashboard
- Ubuntu – Xeon D-1531 (6-core), 32 GB RAM — View dashboard
- Ubuntu (Docker) – AMD Ryzen 7 1700X (8-core), 24 GB RAM — View dashboard
Results from October 10, 2025
Changelog: Runtime and package updated
- macOS – M1 Pro (8-core), 16 GB RAM — View dashboard
- macOS (Docker Desktop) – M1 Pro (8-core), 8 GB RAM — View dashboard
- Ubuntu (Docker) – Xeon D-1531 (6-core), 32 GB RAM — View dashboard
- Ubuntu – Xeon D-1531 (6-core), 32 GB RAM — View dashboard
Pull requests welcome! Add frameworks, improve charts, or enhance the automation: fork this repository and submit a pull request with your changes.
MIT — use freely, modify openly, benchmark responsibly.