This project demonstrates spike load testing using Locust. It simulates realistic traffic patterns with multiple stages including sudden spikes and drops in users.
- Running Locust locally (without Docker)
- Running Locust in Distributed mode (Master + Workers)
- Scaling up workers to generate thousands of RPS
- Using a `.env` file to pass environment variables (API URL, headers, etc.)
```
locust-load-test/
│
├── locustfile.py        # Main Locust entry point
├── requirements.txt     # Dependencies
├── .env
├── docker-compose.yml
├── locust.conf
├── utils/
│   └── user_payload.py  # Responsible for generating request payloads
└── README.md
```
- Uses FastHttpUser for high-performance HTTP requests
- Supports custom payloads for API requests
- Implements a spike load pattern using `LoadTestShape`
- Stops users gracefully after a set number of requests
- Fully modular and scalable structure for adding more endpoints
The test simulates the following stages:
| Stage | Duration (seconds) | Users | Spawn Rate |
|---|---|---|---|
| 1 | 0–10 | 50 | 5/sec |
| 2 | 10–40 | 200 | 20/sec |
| 3 | 40–60 | 500 | 200/sec |
| 4 | 60–75 | 100 | 10/sec |
| 5 | 75–95 | 400 | 50/sec |
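The staged pattern in the table can be sketched as plain tick logic. In a real locustfile this would live in the `tick()` method of a `locust.LoadTestShape` subclass; it is shown here as a standalone function so the mapping from run time to load is explicit:

```python
# Sketch of the staged spike pattern from the table above.
# Each entry is (end_time_seconds, target_users, spawn_rate).
STAGES = [
    (10, 50, 5),
    (40, 200, 20),
    (60, 500, 200),
    (75, 100, 10),
    (95, 400, 50),
]

def tick(run_time: float):
    """Return (users, spawn_rate) for the current run time, or None to stop.

    Mirrors what LoadTestShape.tick() would return: Locust calls tick()
    about once per second and adjusts the user count accordingly.
    """
    for end_time, users, spawn_rate in STAGES:
        if run_time < end_time:
            return (users, spawn_rate)
    return None  # past the last stage: stop the test

print(tick(5))    # stage 1: (50, 5)
print(tick(50))   # stage 3 spike: (500, 200)
print(tick(100))  # past 95s: None, test stops
```

Returning `None` is how a shape class signals Locust to end the run, which is why the table's final stage boundary (95 s) doubles as the total test duration.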
Users are ramped up according to `spawn_rate` to simulate real-world traffic spikes.
Following the defined pattern, Locust will automatically adjust the number of users as the test progresses.
The load test is designed to simulate realistic traffic patterns with multiple spikes and drops in user activity. The flow is as follows:
- **Initial Ramp-Up:** The test begins with 10 users, spawning at a rate of 2 users per second for the first 30 seconds. This simulates a slow start, allowing the system to warm up gradually.
- **Moderate Load:** Over the next 60 seconds, the user count increases to 50 users at a spawn rate of 5 users per second. This stage represents normal traffic conditions.
- **Temporary Drop:** After 90 seconds, the user count drops to 20 users for 30 seconds. This simulates a brief period of low activity, such as off-peak usage.
- **Traffic Spike:** A sudden surge increases the user count to 80 users over 40 seconds. This stage tests the system's ability to handle sudden high traffic spikes.
- **Post-Spike Decline:** Following the spike, the user count reduces to 30 users over 20 seconds, simulating a recovery period as traffic subsides.
- **Gradual Shutdown:** Finally, the test enters a shutdown phase where the number of users gradually decreases to 0 over 30 seconds. This ensures the system returns to an idle state smoothly, without abrupt drops.
This pattern helps evaluate the system’s stability and performance under varying load conditions, including sudden spikes and recovery periods. It is particularly useful for identifying bottlenecks and verifying scalability under real-world traffic scenarios.
```
+----------------+           +--------------------+
|  Locust User   |   POST    |  PetStore /user    |
| (FastHttpUser) |---------->|  API Endpoint      |
+----------------+           +--------------------+
        |                              ^
        |                              |
        |<--------- Response ----------|
        |
        v
  Check status code → Success / Failure
```

Clone the repository and set up a virtual environment:

```
git clone https://github.com/yourusername/locust-load-test.git
cd locust-load-test
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Create a `.env` file (as per the directory structure).
Edit `.env` (example):
```
API_BASE_URL=https://your-api.com/v1/orders
AUTH_TOKEN=Bearer abcde12345
USERS=2000
SPAWN_RATE=200
RUN_TIME=5m
```

Inside Python (`locustfile.py`), access them like:
```python
import os

BASE_URL = os.getenv("API_BASE_URL")
AUTH_HEADER = os.getenv("AUTH_TOKEN")
```

Start Locust:

```
locust
```

Open your browser and navigate to `http://localhost:8089` (or `http://0.0.0.0:8089/`). From the Locust Web UI, you can start the test and monitor metrics such as response time, failures, and requests per second.
- Build images:

```
docker compose build
```

- Run Master + Workers (detached mode):

```
docker compose up -d
```

This runs the Locust Master at port 8089 and starts multiple workers automatically. Enter users & spawn rate in the Web UI (or leave blank if the env is configured).

- Scale workers to increase RPS (e.g. 6 workers):

```
docker compose up --scale worker=6 -d
```

- Check running containers:

```
docker ps
```

- Worker & Master logs:

```
docker compose logs -f
```

- Stop / cleanup:

```
docker compose down
```
- **Change host URL:** update `host` in `.env`
- **Add more endpoints:** create new tasks under the `sancus` class
- **Randomize users:** modify `utils/user_payload.py` to generate dynamic payloads
- **Adjust load shape:** modify the `stages` list in the `SpikeLoadShape` class
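For the payload-randomization point, here is a hypothetical sketch of what `utils/user_payload.py` could look like; the function and field names are illustrative (modeled on the PetStore `/user` endpoint), not the project's actual code:

```python
# Hypothetical sketch of utils/user_payload.py.
# Generates a randomized user payload per request so that repeated test
# runs do not collide on unique fields such as username or email.
import random
import string


def _rand_suffix(n: int = 8) -> str:
    """Random lowercase alphanumeric suffix to make each payload unique."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))


def build_user_payload() -> dict:
    """Return a fresh JSON-serializable payload for a POST /user request."""
    uid = _rand_suffix()
    return {
        "username": f"loadtest_{uid}",
        "email": f"loadtest_{uid}@example.com",
        "firstName": "Load",
        "lastName": "Test",
    }
```

Keeping payload generation in a separate module like this is what makes the structure "fully modular": the locustfile's tasks just call `build_user_payload()` and stay free of data-generation details.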
Locust Web UI provides real-time metrics:
- Requests per second
- Failure count and rate
- Response times (min, max, avg, percentiles)
- Number of active users
- Use `FastHttpUser` for high-concurrency tests
- Modularize payloads and tasks for better maintainability
- Use virtual environments to isolate dependencies
- Start with small user counts and ramp up gradually during testing
- Monitor system resources on the target API server
- Ensure the API supports idempotency to avoid test conflicts (e.g., duplicate users)
- Log failures with detailed info for debugging
- Locust Official Documentation – Complete guide for Locust features, APIs, and load test setup.
- Swagger PetStore API – Official demo API used for testing.
- FastHttpUser for High Performance – Documentation on FastHttpUser for concurrent load tests.
- LoadTestShape for Custom Load Patterns – Guide for creating custom user load patterns.
- Virtual Environments in Python – How to create isolated Python environments for dependencies.
- gevent Library – Underlying coroutine library used by Locust for asynchronous requests.
Feel free to open a PR with improvements.