Welcome to the team! This document will give you a tour of the Bank of Anthos application's architecture. We'll keep it simple and explain things from the ground up.
At its core, this project is a web-based banking application. Users can log in, see their account balances, and make transactions.
Instead of being one giant, single program (a "monolith"), our application is built as a set of small, independent programs that work together. This style is called a microservices architecture. Think of it like a team of specialists: one person handles user logins, another handles account data, and a third handles transaction history. They each have a specific job and communicate with each other to get things done.
Here are the main "specialists" (services) that make up our application. You can find the source code for each in the /src/ directory.
- frontend: This is the user-facing part of the application—the website you see in your browser. It's written in Python. It receives requests from users and then talks to the other backend services to get the information it needs.
- userservice: This service manages everything related to users: creating new users, storing user data, and handling logins.
- contacts: Manages a user's list of contacts for sending payments.
- accounts: Responsible for managing user bank accounts and their balances.
- ledgerwriter: When a user sends money, this service is responsible for recording that transaction in the "ledger" (our database of all transactions).
- transactionhistory: This service provides the history of all transactions for a given account.
- loadgenerator: This is a utility service, not part of the core application. Its job is to simulate user traffic to test how our application performs under load.
- accounts-db: A PostgreSQL database that stores user account and balance information.
- ledger-db: A PostgreSQL database that stores the history of all transactions.
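To make the "specialists talking to each other" idea concrete, here is a minimal, self-contained sketch of the pattern, using only the Python standard library. The port, the `/balance` endpoint, and the JSON shape are hypothetical stand-ins for illustration, not the actual Bank of Anthos API.

```python
# Sketch: one tiny "backend" service and a "frontend" that calls it over HTTP.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class BalanceHandler(BaseHTTPRequestHandler):
    """Plays the role of a tiny backend service (e.g. a balance reader)."""
    def do_GET(self):
        # Return a hard-coded balance as JSON (illustrative data only).
        body = json.dumps({"account": "1234567890", "balance_cents": 105000})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep the demo output quiet

def frontend_fetch_balance(backend_url):
    """Plays the role of the frontend: call a backend over HTTP, parse JSON."""
    with urllib.request.urlopen(backend_url, timeout=5) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Bind to an ephemeral port and run the "backend" in a thread.
    server = ThreadingHTTPServer(("127.0.0.1", 0), BalanceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/balance"
    print(frontend_fetch_balance(url)["balance_cents"])  # prints 105000
    server.shutdown()
```

The real services do exactly this, just with more endpoints, authentication, and databases behind them.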
You can't serve real users from programs running on your laptop; they need to run on powerful, reliable infrastructure.
Each of our microservices is packaged into a container. A container is like a standardized, lightweight box that holds everything a program needs to run: the code, libraries, and settings.
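As a hedged sketch, a container image for one of our Python services is typically described by a Dockerfile along these lines. The file names and entrypoint here are illustrative, not this project's actual Dockerfile:

```dockerfile
# Illustrative Dockerfile for a Python microservice.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code into the image.
COPY . .
# The service serves HTTP on a port (8080 is a common convention).
EXPOSE 8080
CMD ["python", "app.py"]
```

Everything the service needs—interpreter, libraries, code—travels inside that one image.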
But how do we manage all these boxes? That's where Kubernetes comes in. Think of Kubernetes as a sophisticated robot manager for our containers. It does things like:
- Orchestration: Starts, stops, and organizes all our service containers.
- Scaling: If one service gets busy, Kubernetes can automatically create more copies of it to handle the load.
- Self-healing: If a container crashes, Kubernetes automatically restarts it.
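The properties above are declared in YAML manifests. Here is a minimal sketch of a Deployment, loosely modeled on what you'd find in kubernetes-manifests/; the image name and labels are illustrative, not the repo's actual configuration:

```yaml
# Illustrative Deployment: ask Kubernetes to keep two copies of the
# frontend running, restarting them if they crash.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2                 # scaling: two copies of the container
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example.com/bank-of-anthos/frontend:v1   # illustrative image
          ports:
            - containerPort: 8080
```

If a pod dies, Kubernetes notices the count dropped below `replicas: 2` and starts a replacement—that's the self-healing behavior in practice.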
We use Google Kubernetes Engine (GKE), which is a managed Kubernetes service provided by Google Cloud.
You can see how we configure our services for Kubernetes in the kubernetes-manifests/ directory.
How do we create our Kubernetes clusters, networks, and databases in the first place? Instead of clicking buttons in the Google Cloud console, we define all of our infrastructure in code. This is called Infrastructure as Code (IaC).
We use a tool called Terraform for this. The configuration files in the iac/ directory tell Terraform exactly what our cloud environment should look like. This is powerful because it makes our setup repeatable, version-controlled, and easy to change.
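To give a feel for what "infrastructure in code" looks like, here is a hedged, minimal sketch of declaring a GKE cluster in Terraform. The resource name, cluster name, and region are illustrative and intentionally simplified, not our actual iac/ configuration:

```hcl
# Illustrative Terraform: declare a GKE cluster as code.
resource "google_container_cluster" "app" {
  name     = "bank-of-anthos-example"   # illustrative name
  location = "us-central1"              # illustrative region

  # For this sketch, let GKE Autopilot manage the nodes for us.
  enable_autopilot = true
}
```

Running `terraform apply` compares this declaration against what exists in the cloud and creates or updates resources to match—which is what makes the setup repeatable and version-controlled.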
Running a dozen microservices on your development machine can be complicated. We use a tool called Skaffold to make this easy.
The skaffold.yaml files are configuration for this tool. When you're ready to develop, you can run a single Skaffold command. It will then:
- Build container images for any code you've changed.
- Deploy them to a local or remote Kubernetes cluster.
- Stream all the logs from all the services to your terminal.
It watches your files for changes, so when you save a file, it automatically repeats the process. It's a huge time-saver!
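For orientation, a minimal skaffold.yaml can look roughly like this. The artifact name and paths below are illustrative, not this repo's actual files:

```yaml
# Illustrative Skaffold config: what to build and what to deploy.
apiVersion: skaffold/v4beta6
kind: Config
build:
  artifacts:
    - image: frontend
      context: src/frontend              # illustrative path to the code
manifests:
  rawYaml:
    - kubernetes-manifests/frontend.yaml # illustrative manifest path
```

With a config like this in place, `skaffold dev` runs the build–deploy–watch loop described above.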
Continuous Integration/Continuous Deployment (CI/CD) is the automated process that takes new code from a developer's machine and gets it into production.
Our CI/CD pipelines are defined in the .github/workflows/ and .github/cloudbuild/ directories. When a developer pushes new code to GitHub, these pipelines automatically:
- Run tests to make sure the new code didn't break anything.
- Build new container images.
- Deploy the new images to our different environments (like staging and production).
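The steps above map naturally onto a workflow file. Here is a hedged sketch in the style of .github/workflows/; the job names and commands are illustrative, not our actual pipeline:

```yaml
# Illustrative GitHub Actions workflow: test, then build, on every push.
name: ci-example
on:
  push:
    branches: [main]
jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test                                   # illustrative command
      - name: Build container image
        run: docker build -t frontend:${{ github.sha }} src/frontend  # illustrative
```

Deployment to staging and production happens in later pipeline stages, gated so a broken build never reaches users.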
Here's a simplified diagram showing how a user request flows through the system.
```mermaid
graph TD
    subgraph "User's Browser"
        A[Website]
    end
    subgraph "Kubernetes Cluster (GKE)"
        B(Frontend)
        C(Userservice)
        D(Accounts)
        E(Transaction History)
        F(Ledger Writer)
        subgraph "Databases"
            G[Accounts DB]
            H[Ledger DB]
        end
    end
    A --> B;
    B --> C;
    B --> D;
    B --> E;
    B --> F;
    D --> G;
    E --> H;
    F --> H;
```