Karbos is a carbon-aware workload orchestrator that reduces Scope 3 cloud emissions by time-shifting non-urgent compute tasks to greener energy windows.
Features • Quick Start • Architecture • How It Works • Documentation
Cloud computing accounts for 2-4% of global carbon emissions, roughly on par with the aviation industry. Traditional job schedulers execute tasks immediately, regardless of grid carbon intensity. This means your batch jobs, data pipelines, and background processing might run during peak coal/gas hours.
Karbos intelligently schedules non-urgent workloads to execute during low-carbon windows while respecting SLA deadlines. By analyzing real-time grid carbon intensity forecasts, it automatically delays jobs to greener time slots, reducing your Scope 3 emissions by up to 30% without infrastructure changes.
- **Carbon-Aware Scheduling** - Automatically time-shifts jobs to low-carbon windows
- **Real-Time Dashboard** - Monitor emissions savings, job queue, and infrastructure
- **Grid Intelligence** - Integrates with ElectricityMaps API for live carbon data
- **SLA Compliance** - Guarantees deadline adherence while optimizing for carbon
- **Smart Queue Management** - Dual-queue system (immediate + delayed execution)
- **Prometheus Metrics** - Export CO₂ savings, queue depth, and worker health
- **Production-Ready** - Complete Docker deployment with health checks
- **Circuit Breaker** - Graceful degradation when carbon APIs are unavailable
```mermaid
graph TB
    subgraph "Frontend Layer"
        A[Next.js Dashboard<br/>Port 3000]
    end
    subgraph "API Layer"
        B[Go + Fiber API<br/>Port 8080]
        M[Prometheus Metrics<br/>Port 9090]
    end
    subgraph "Carbon Intelligence"
        C[Carbon Scheduler]
        D[ElectricityMaps API]
        E[Circuit Breaker]
    end
    subgraph "Data Layer"
        F[(PostgreSQL<br/>Jobs, Logs, Cache)]
        G[(Redis<br/>Message Queue)]
    end
    subgraph "Execution Layer"
        H[Worker Pool]
        I[Docker Engine]
    end

    A -->|HTTP/REST| B
    B --> C
    C -->|Fetch Carbon Data| D
    C --> E
    B --> F
    B --> G
    G -->|Dequeue Jobs| H
    H -->|Execute in Containers| I
    B --> M

    style A fill:#61DAFB,stroke:#333,stroke-width:2px
    style B fill:#00ADD8,stroke:#333,stroke-width:2px
    style C fill:#10B981,stroke:#333,stroke-width:2px
    style F fill:#336791,stroke:#333,stroke-width:2px
    style G fill:#DC382D,stroke:#333,stroke-width:2px
    style H fill:#FF6B6B,stroke:#333,stroke-width:2px
```
Karbos uses a time-based optimization algorithm to minimize carbon footprint:
- **Job Submission**: User submits a job with a deadline (e.g., "complete within 6 hours")
- **Carbon Forecast Analysis**: System fetches a 24-hour grid intensity forecast
- **Optimal Window Selection**: Algorithm finds the lowest carbon-intensity window before the deadline
- **Smart Scheduling Decision**: if `current_intensity <= optimal_intensity + threshold`, execute immediately; otherwise, schedule for the optimal window
- **Queue Management**: Jobs are placed in one of two structures:
  - Immediate Queue (FIFO) - execute now
  - Delayed Set (sorted by timestamp) - execute later
- **Promoter Service**: Every 10s, checks delayed jobs and promotes them to the immediate queue when their scheduled time arrives
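The decision rule above can be sketched in Go. This is an illustrative model, not the project's actual scheduler: `chooseWindow`, the slot-based forecast, and the sample numbers are all assumptions.

```go
package main

import "fmt"

// chooseWindow picks the cleanest forecast slot that still meets the deadline.
// forecast holds hourly grid intensity in gCO2/kWh; slot 0 is "now".
// If running now is within threshold of the best slot, run immediately.
func chooseWindow(forecast []float64, deadlineSlot int, threshold float64) (runNow bool, bestSlot int) {
	for i := 1; i <= deadlineSlot && i < len(forecast); i++ {
		if forecast[i] < forecast[bestSlot] {
			bestSlot = i
		}
	}
	runNow = forecast[0] <= forecast[bestSlot]+threshold
	return runNow, bestSlot
}

func main() {
	forecast := []float64{450, 430, 300, 250, 320} // hypothetical gCO2/kWh forecast
	runNow, slot := chooseWindow(forecast, 4, 50)
	fmt.Println(runNow, slot) // false 3: delay the job to the 250 gCO2/kWh slot
}
```

The threshold keeps the scheduler from delaying a job for a negligible gain: if running now is almost as clean as waiting, it executes immediately.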
```go
current_intensity := getCurrentGridIntensity()   // e.g., 450 gCO₂/kWh
optimal_intensity := findLowestIntensityWindow() // e.g., 250 gCO₂/kWh
carbon_savings := current_intensity - optimal_intensity
// Result: 200 gCO₂/kWh saved (44% reduction)
```

For a typical 1-hour compute job consuming 10 kWh:

- Without Karbos: 450 g/kWh × 10 kWh = 4,500 g CO₂
- With Karbos: 250 g/kWh × 10 kWh = 2,500 g CO₂
- Savings: 2,000 g CO₂ per job (equivalent to driving 8 miles in a gas car)
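The worked example above reduces to a one-line helper; this is an illustrative sketch (`carbonSavings` is not a function from the codebase):

```go
package main

import "fmt"

// carbonSavings returns grams of CO2 avoided by shifting a job drawing
// energyKWh from a window at currentIntensity to one at optimalIntensity
// (both in gCO2/kWh).
func carbonSavings(currentIntensity, optimalIntensity, energyKWh float64) float64 {
	return (currentIntensity - optimalIntensity) * energyKWh
}

func main() {
	fmt.Println(carbonSavings(450, 250, 10)) // 2000 (grams, matching the example above)
}
```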
- Docker & Docker Compose
- 4GB RAM minimum
- Ports 3000, 8080, 5432, 6379 available
```bash
# Clone the repository
git clone https://github.com/Sambit-Mondal/Karbos.git
cd Karbos

# Start all services
docker-compose up -d

# Access the platform
# Dashboard: http://localhost:3000
# API:       http://localhost:8080
# Metrics:   http://localhost:9090/metrics
```

Verify the deployment:

```bash
# Check all services are healthy
docker-compose ps

# Test API health
curl http://localhost:8080/health

# View logs
docker-compose logs -f api
```

Submit your first job:

```bash
curl -X POST http://localhost:8080/api/submit \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "demo-user",
    "docker_image": "alpine:latest",
    "command": ["echo", "Hello, Carbon-Aware World!"],
    "deadline": "2025-12-12T00:00:00Z"
  }'
```
- Real-time CO₂ savings, job metrics, and 24-hour carbon intensity forecast
- Live job status tracking with detailed execution logs
- Regional carbon intensity map and generation mix
- Worker node health, queue depth, and system performance metrics
- Interactive job submission with carbon impact simulation
```
Karbos/
├── client/                    # Frontend Dashboard (Next.js)
│   ├── app/                   # Next.js 16 App Router
│   ├── components/            # React components with animations
│   │   ├── Navigation.tsx
│   │   └── tabs/
│   │       ├── Overview.tsx          # KPIs & Eco-Curve
│   │       ├── Workloads.tsx         # Job queue
│   │       ├── GridIntelligence.tsx  # Carbon forecasts
│   │       ├── Infrastructure.tsx    # Worker nodes
│   │       └── Playground.tsx        # Job submission
│   ├── lib/                   # Utilities
│   └── types/                 # TypeScript definitions
│
├── server/                    # Backend API (Go)
│   ├── cmd/api/               # Server entry point
│   ├── internal/
│   │   ├── config/            # Configuration
│   │   ├── database/          # PostgreSQL operations
│   │   ├── handlers/          # HTTP handlers
│   │   ├── models/            # Data models
│   │   └── queue/             # Redis queue
│   └── database/
│       └── schema.sql         # Database schema
│
├── docs/                      # Documentation
└── audit-logs/                # Project audit trail
```
| Metric | Value |
|---|---|
| Job Throughput | 1,000+ jobs/hour per worker |
| Carbon Savings | Up to 30% reduction |
| SLA Compliance | 99.9% deadline adherence |
| Latency (P95) | < 50ms API response |
| Queue Depth | 10,000+ jobs (Redis sorted set) |
| Worker Scaling | Horizontal (unlimited) |
| Data Retention | 90 days (configurable) |
| High Availability | Multi-worker failover |
```
POST /api/submit              # Submit new job
GET  /api/jobs                # List all jobs
GET  /api/jobs/:id            # Get job details
GET  /api/users/:id/jobs      # Get user's jobs
GET  /api/carbon-forecast     # Get carbon intensity forecast
GET  /api/carbon-cache        # Get cached carbon data
GET  /api/system/health       # Infrastructure metrics
GET  /health                  # Health check
GET  /ready                   # Readiness probe
GET  /metrics                 # Prometheus metrics (port 9090)
```

**Request:**
```bash
curl -X POST http://localhost:8080/api/submit \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "engineering-team",
    "docker_image": "python:3.11-slim",
    "command": ["python", "train_model.py"],
    "deadline": "2025-12-12T18:00:00Z",
    "region": "US-EAST"
  }'
```

**Response:**
```json
{
  "job_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "status": "DELAYED",
  "created_at": "2025-12-11T10:30:00Z",
  "scheduled_time": "2025-12-11T14:00:00Z",
  "immediate": false,
  "expected_intensity": 280.5,
  "carbon_savings": 165.3,
  "message": "Job scheduled for optimal carbon window"
}
```

**Frontend:**

```bash
cd client
npm install
npm run dev
# Visit http://localhost:3000
```

**Backend:**

```bash
cd server

# Copy environment template
cp .env.example .env

# Edit .env with your credentials
# DATABASE_URL=postgresql://...
# REDIS_HOST=localhost

# Setup database
psql -d karbos -f database/schema.sql

# Start Redis
docker run -d -p 6379:6379 redis:alpine

# Run server
go run cmd/api/main.go
# Server runs on http://localhost:8080
```

**Testing the API:**

```bash
# Submit a job
curl -X POST http://localhost:8080/api/submit \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "demo-user",
    "docker_image": "python:3.11",
    "deadline": "2025-12-06T18:00:00Z"
  }'

# Check health
curl http://localhost:8080/health
```

- Overview Tab: CO₂ savings metrics, Eco-Curve forecast, recent activity
- Workloads Tab: Job queue with status, drawer details, execution logs
- Grid Intelligence Tab: Regional carbon intensity map, generation mix
- Infrastructure Tab: Worker nodes, queue health, system metrics
- Playground Tab: Interactive job submission with simulation
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/submit` | Submit new job |
| GET | `/api/jobs/:id` | Get job details |
| GET | `/api/users/:userId/jobs` | List user jobs |
| GET | `/health` | Service health check |
| GET | `/ready` | Readiness probe |
```bash
cd client
npm run build   # Verify build
npm run dev     # Development server
```

```bash
cd server
go test ./...              # Run tests
go build cmd/api/main.go   # Build binary
```

**Jobs table:**

- id (UUID, Primary Key)
- user_id (VARCHAR)
- docker_image (VARCHAR)
- status (ENUM: PENDING, DELAYED, RUNNING, COMPLETED, FAILED)
- scheduled_time (TIMESTAMP)
- deadline (TIMESTAMP)
- created_at, started_at, completed_at

**Redis queues:**

- Immediate Queue: `karbos:queue:immediate` (List/FIFO)
- Delayed Set: `karbos:queue:delayed` (Sorted Set by timestamp)
For local development without Docker:
Manual Setup Instructions
- Node.js 20+
- Go 1.23+
- PostgreSQL 16
- Redis 7
```bash
cd client
npm install
cp .env.example .env
npm run dev   # http://localhost:3000
```

```bash
cd server
cp .env.example .env

# Edit .env with your credentials
# DATABASE_URL=postgresql://user:pass@localhost:5432/karbos
# REDIS_HOST=localhost

# Initialize database
psql -d karbos -f database/schema.sql

# Run server
go run cmd/api/main.go   # http://localhost:8080
```

Start a worker:

```bash
cd server
go run cmd/worker/main.go
```

- ML Model Training: Schedule TensorFlow/PyTorch jobs during off-peak hours
- Data Pipelines: Run ETL jobs in low-carbon windows
- Report Generation: Delay non-urgent analytics to greener periods
- Video Encoding: Process large video files when grid is cleanest
- Image Processing: Batch resize/compress operations overnight
- Content Archival: Move cold storage during optimal carbon times
- Scientific Simulations: Schedule compute-intensive research jobs
- Genomics Processing: Delay bioinformatics workflows
- Climate Modeling: Run carbon-aware climate simulations
```bash
# Scale to 5 worker nodes
docker-compose up -d --scale worker=5

# Each worker automatically:
# - Registers with unique UUID
# - Sends heartbeat every 10s
# - Processes jobs from shared queue
```

- Docker Swarm: Native orchestration with built-in load balancing
- Kubernetes: Deploy with Helm charts (see the `k8s/` directory)
- Cloud Services: AWS ECS, Google Cloud Run, Azure Container Instances
We welcome contributions! Please see our Contributing Guide for details.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
We follow Conventional Commits:
- `feat:` New features
- `fix:` Bug fixes
- `docs:` Documentation changes
- `perf:` Performance improvements
- `refactor:` Code refactoring
Karbos exports Prometheus metrics on port 9090:
```
# Total CO₂ saved (grams)
karbos_co2_saved_total

# Jobs by status
karbos_jobs_total{status="completed"}
karbos_jobs_total{status="delayed"}

# Queue depth
karbos_queue_depth{type="immediate"}
karbos_queue_depth{type="delayed"}

# Worker health
karbos_workers_active
```
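These names follow the Prometheus text exposition format. The sketch below renders them from hypothetical sample values using only the standard library; the real server presumably uses the Prometheus Go client instead of hand-formatting.

```go
package main

import "fmt"

// renderMetrics emits the Karbos metric names above in Prometheus
// text exposition format. All input values here are illustrative.
func renderMetrics(co2Saved float64, jobs, queueDepth map[string]int, workers int) string {
	out := fmt.Sprintf("karbos_co2_saved_total %g\n", co2Saved)
	for _, st := range []string{"completed", "delayed"} {
		out += fmt.Sprintf("karbos_jobs_total{status=%q} %d\n", st, jobs[st])
	}
	for _, t := range []string{"immediate", "delayed"} {
		out += fmt.Sprintf("karbos_queue_depth{type=%q} %d\n", t, queueDepth[t])
	}
	out += fmt.Sprintf("karbos_workers_active %d\n", workers)
	return out
}

func main() {
	fmt.Print(renderMetrics(2000,
		map[string]int{"completed": 12, "delayed": 3},
		map[string]int{"immediate": 4, "delayed": 9}, 2))
}
```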
- Non-root Containers: All services run as unprivileged users
- Network Isolation: Services communicate via internal Docker network
- Password Protection: Database and Redis require authentication
- Health Checks: Automatic restart of unhealthy containers
- Circuit Breaker: Graceful degradation on external API failures
For security issues, please email sambitmondal2005@gmail.com (do not open public issues).
This project is licensed under the Apache-2.0 License - see the LICENSE file for details.
- ElectricityMaps - Real-time carbon intensity data
- Green Software Foundation - Carbon-aware computing principles
- CNCF - Cloud native best practices
- Documentation: Readme
- Issues: GitHub Issues
- LinkedIn: @sambitm02
- GitHub: @Sambit-Mondal
Built with 💚 for a sustainable cloud future
⭐ Star us on GitHub - it helps!
