Ensure you have the following installed on your machine:
- Ubuntu OS
- Docker
- Docker Compose
Update your package list and install dependencies:

    sudo apt-get update
    sudo apt-get install \
        ca-certificates \
        curl \
        gnupg
Add Docker's official GPG key:

    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
Set up the Docker repository:

    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker Engine:

    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify that Docker Engine is installed correctly:

    sudo docker run hello-world
Clone the repository:

    git clone <repository-url>
    cd <repository-directory>
Build and start the Docker containers:

    docker-compose up -d
Navigate to the project root directory, where the docker-compose.yml and Makefile are located. The following Makefile targets are available for managing the stack:

    make build     # build the Docker images
    make up        # start the containers
    make logs      # follow the container logs
    make down      # stop the containers
    make clean     # remove the containers and generated artifacts
    make restart   # restart the containers
To run the performance tests and generate the charts, follow these steps:
Ensure that the load balancer and server containers are running:

    docker-compose up -d

Run the performance test script:

    python performance_test.py

View the generated charts in the `charts` folder.
The test script will send 10,000 requests to the load balancer, collect the responses, and generate the charts showing the request distribution across different server counts.
A Python script was used to send 10,000 requests to the API endpoint. The responses were analyzed to extract the server number handling each request, providing insights into the request distribution across servers.
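For reference, a minimal sketch of such a script is shown below. It assumes the aiohttp and matplotlib packages, a load balancer reachable at http://localhost:5000/home, and a JSON response whose message names the replica that served the request; the actual performance_test.py may differ in endpoint, payload format, and plotting details.

```python
# Minimal sketch only; the real performance_test.py may differ.
import asyncio
from collections import Counter

import aiohttp                      # assumed HTTP client
import matplotlib.pyplot as plt     # assumed charting library

URL = "http://localhost:5000/home"  # assumed load balancer endpoint
NUM_REQUESTS = 10_000

async def fetch_server_id(session: aiohttp.ClientSession) -> str:
    # Assumes the response JSON identifies the serving replica,
    # e.g. {"message": "Hello from Server: 3", "status": "successful"}.
    async with session.get(URL) as resp:
        data = await resp.json()
        return data["message"].split(":")[-1].strip()

async def run() -> Counter:
    async with aiohttp.ClientSession() as session:
        servers = await asyncio.gather(
            *(fetch_server_id(session) for _ in range(NUM_REQUESTS))
        )
    return Counter(servers)

if __name__ == "__main__":
    counts = asyncio.run(run())
    plt.bar(list(counts.keys()), list(counts.values()))
    plt.xlabel("Server")
    plt.ylabel("Requests handled")
    plt.title(f"Distribution of {NUM_REQUESTS:,} requests")
    plt.savefig("charts/request_distribution.png")
```

Counting the replica identifier in each response is what yields the per-server request counts plotted in the charts.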
10,000 asynchronous requests were launched on server_count = 3 server containers, and the request count handled by each server instance is shown below.
The chart reveals that server 3 handled more than 50% of the total requests, indicating that the hash function in the load balancer distributes requests unevenly across the servers.
When launched on server_count = 2 server containers, the bar chart showed that server 1 handled over 80% of the requests.
For server_count = 4 servers, the chart indicates that servers 1 and 2 handled over 80% of the requests collectively.
With server_count = 5 servers, the chart showed that servers 1, 2, 3, and 4 handled a similar number of requests, while server 5 handled less than 10% of the total.
Finally, when launched on server_count = 6 server containers, the bar chart revealed that all servers handled a similar percentage of the total requests.
The results demonstrate that as the number of servers increases, the hash function in the load balancer distributes the requests more evenly across servers, reducing the imbalance.
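A quick way to inspect how any candidate hash spreads requests over N servers, without running the containers, is a small counting script like the sketch below. The quadratic hash shown is only an assumed stand-in, not necessarily the function implemented in the load balancer.

```python
# Illustrative only; substitute the load balancer's actual hash function.
from collections import Counter

def toy_hash(i: int) -> int:
    # Hypothetical request hash; the project's real function may differ.
    return i * i + 2 * i + 17

def distribution(server_count: int, requests: int = 10_000) -> Counter:
    # Map each request ID onto a server slot and count hits per slot.
    return Counter(toy_hash(i) % server_count for i in range(requests))

for n in (2, 3, 4, 5, 6):
    print(f"{n} servers:", dict(sorted(distribution(n).items())))
```

Swapping in the project's actual hash (and the modified variant discussed next) makes it easy to compare their distributions before running the full load test.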
10,000 asynchronous requests were launched on server_count = 3 server containers using a modified hash function, and the request distribution is presented below.
The chart shows that server 2 handled more than 50% of the requests, indicating an uneven distribution.
With server_count = 2 servers, the chart revealed that server 2 handled over 80% of the requests.
When launched on server_count = 4 server containers, the chart below shows that servers 2, 3 and 4 handled over 90% of the requests collectively.
With server_count = 5 servers, the line chart indicated that servers 1, 2, 3, and 4 handled a similar number of requests, while server 5 handled less than 10% of the total.
Finally, for server_count = 6 servers, the chart reveals that all servers handled a similar percentage of the total requests.
The results suggest that, as with the original hash function, the modified hash function distributes the load unevenly when the number of servers is small, with some servers handling significantly fewer requests than others, and that the distribution evens out as the number of servers increases.