ai-log-analyzer is a full-stack incident triage project for uploading application logs, extracting repeated failures, enriching findings with an LLM, storing analyses in PostgreSQL, and exposing Prometheus metrics.
It is designed as a compact SaaS-style monorepo that demonstrates backend architecture, frontend product thinking, observability, and practical AI integration in one portfolio-ready project.
Core capabilities:
- Upload `.log` and `.txt` files through a React dashboard
- Parse warnings, errors, stack traces, timestamps, repeated failures, and likely components
- Store analyses and extracted issues in PostgreSQL with async SQLAlchemy
- Enrich deterministic findings with OpenAI Responses API analysis
- Expose Prometheus metrics for requests, uploads, and analysis lifecycle events
- Browse historical analyses and inspect incident details in a simple dashboard
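The repeated-failure extraction above can be sketched roughly as follows. This is a simplified illustration, not the project's actual parser: `parse_log`, the regex, and the output shape are all assumptions.

```python
import re
from collections import Counter

# Matches lines like: 2024-05-01 12:00:00 ERROR payment-service: Timeout connecting to db
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2})\s+"
    r"(?P<level>ERROR|WARN(?:ING)?)\s+"
    r"(?P<component>[\w.-]+):?\s+"
    r"(?P<message>.*)$"
)

def parse_log(text: str) -> dict:
    """Extract warning/error lines and count repeated failure messages."""
    issues = []
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            issues.append(m.groupdict())
    # A message seen more than once is flagged as a repeated failure.
    counts = Counter(i["message"] for i in issues)
    repeated = {msg: n for msg, n in counts.items() if n > 1}
    return {"issues": issues, "repeated_failures": repeated}

sample = """\
2024-05-01 12:00:00 ERROR payment-service: Timeout connecting to db
2024-05-01 12:00:05 ERROR payment-service: Timeout connecting to db
2024-05-01 12:00:09 WARN worker: queue depth high
"""
result = parse_log(sample)
```

The deterministic pass runs first, so the LLM enrichment step receives structured findings rather than raw text.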
Architecture:

```mermaid
flowchart LR
U[User] --> F[React + Vite Frontend]
F -->|POST /api/v1/analyses/upload| B[FastAPI Backend]
F -->|GET analyses| B
B --> P[Deterministic Log Parser]
B --> L[LLM Analysis Service]
L --> O[OpenAI Responses API]
B --> D[(PostgreSQL)]
B --> M[/metrics]
M --> PR[Prometheus]
```
Tech stack:

- Backend: FastAPI, Python 3.12, SQLAlchemy 2.0 async, Pydantic v2
- Frontend: React, TypeScript, Vite, React Router
- Database: PostgreSQL
- AI: OpenAI Python SDK with the Responses API
- Observability: `prometheus_client`
- Infra: Docker Compose
- Quality tooling: Pytest, Ruff, Prettier
Project structure:

```
.
├── .env.example
├── .gitignore
├── .prettierignore
├── .prettierrc.json
├── Makefile
├── docker-compose.yml
├── sample_logs
│   ├── payment-service.log
│   └── worker-timeout.txt
├── backend
│   ├── Dockerfile
│   ├── pyproject.toml
│   ├── app
│   │   ├── api
│   │   ├── core
│   │   ├── models
│   │   ├── repositories
│   │   ├── schemas
│   │   └── services
│   └── tests
│       ├── test_health.py
│       └── test_log_parser.py
└── frontend
    ├── Dockerfile
    ├── package.json
    └── src
        ├── components
        └── pages
```
You will need:
- Python 3.12
- Node.js 22 or newer
To install Node.js:
- macOS with Homebrew: `brew install node`
- Windows with `winget`: `winget install OpenJS.NodeJS`
- Linux with `nvm`:

```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
nvm install 22
nvm use 22
```

Then copy the example environment file:

```bash
cp .env.example .env
```

Set a valid `OPENAI_API_KEY` if you want the LLM enrichment step to run successfully.
Backend setup:

```bash
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
pip install -e ".[dev]"
```

On Windows PowerShell:

```powershell
cd backend
py -3 -m venv .venv
.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -e ".[dev]"
```

Frontend setup:

```bash
cd frontend
npm install
```

To run the full stack:

```bash
make up
```

Or directly:
```bash
docker compose up --build
```

Once running:

- Frontend: http://localhost:5173
- Backend API: http://localhost:8000
- Health: http://localhost:8000/api/v1/health
- Metrics: http://localhost:8000/metrics
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/v1/health` | Health and version check |
| GET | `/api/v1/analyses` | List analyses ordered by creation time |
| GET | `/api/v1/analyses/{analysis_id}` | Fetch a single analysis and its extracted issues |
| POST | `/api/v1/analyses/upload` | Upload a log file for parsing and LLM analysis |
| GET | `/metrics` | Prometheus metrics endpoint |
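As an illustration, the upload endpoint can be called from Python with only the standard library. The multipart field name `file` is an assumption about the FastAPI handler, and the request is built here but not sent:

```python
import urllib.request
import uuid

def build_upload_request(base_url: str, filename: str, content: bytes) -> urllib.request.Request:
    """Build a multipart/form-data POST for the upload endpoint (not sent here)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        # "file" is an assumed form field name for the FastAPI UploadFile parameter.
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: text/plain\r\n\r\n"
    ).encode() + content + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        url=f"{base_url}/api/v1/analyses/upload",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
        method="POST",
    )

req = build_upload_request("http://localhost:8000", "payment-service.log", b"ERROR timeout")
# urllib.request.urlopen(req) would submit the file once the stack is running.
```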
The backend exposes Prometheus metrics at http://localhost:8000/metrics.
Example scrape config:
```yaml
scrape_configs:
  - job_name: "ai-log-analyzer-backend"
    metrics_path: /metrics
    static_configs:
      - targets:
          - host.docker.internal:8000
```

Tracked metrics include:

- `http_requests_total`
- `http_request_duration_seconds`
- `analyses_created_total`
- `analyses_succeeded_total`
- `analyses_failed_total`
- `uploaded_bytes_total`
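These counters compose naturally into alerts. A hypothetical Prometheus alerting rule on sustained analysis failures might look like this (the group name, threshold, and durations are illustrative choices, not shipped with the project):

```yaml
groups:
  - name: ai-log-analyzer
    rules:
      - alert: AnalysisFailures
        expr: rate(analyses_failed_total[5m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Log analyses are failing"
```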
Placeholder slots for portfolio screenshots:
- Upload dashboard screenshot goes here
- Analysis list screenshot goes here
- Analysis detail screenshot goes here
Two example logs are included for local testing: `sample_logs/payment-service.log` and `sample_logs/worker-timeout.txt`.
Run backend tests with:

```bash
make test
```

Current coverage includes:
- parser logic
- health endpoint behavior
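Parser coverage includes multi-line cases such as stack traces. A standalone sketch of that kind of behavior, in the spirit of `tests/test_log_parser.py` (the function name and grouping heuristic are hypothetical, not the project's implementation):

```python
def extract_stack_traces(text: str) -> list[list[str]]:
    """Group indented 'at ...' continuation lines under the preceding error line."""
    traces: list[list[str]] = []
    current = None
    for line in text.splitlines():
        if line.lstrip().startswith("at ") and current is not None:
            # Continuation frame belonging to the current trace.
            current.append(line.strip())
        elif "Exception" in line or "ERROR" in line:
            # Start a new trace at the triggering error line.
            current = [line.strip()]
            traces.append(current)
        else:
            # Any other line ends the current trace.
            current = None
    return traces

log = """\
2024-05-01 12:00:00 ERROR worker: java.lang.NullPointerException
    at com.example.Worker.run(Worker.java:42)
    at java.base/java.lang.Thread.run(Thread.java:833)
INFO heartbeat ok
"""
traces = extract_stack_traces(log)
```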
Backend:

```bash
python -m pip install -e "backend[dev]"
python -m ruff check backend
python -m ruff format backend
```

Frontend:

```bash
cd frontend
npm install
npm run lint
npm run format
```

Make targets:

```bash
make up
make down
make logs
make test
make fmt
```

Roadmap:

- Add Alembic migrations for schema evolution
- Add background job processing for large uploads
- Add authentication and multi-tenant workspaces
- Add richer parser heuristics for more log formats
- Add dashboard charts and trend views
- Add CI automation for tests, linting, and container builds
I used AI as an engineering accelerator, not as a substitute for design decisions or review.
- AI helped scaffold initial boilerplate for the FastAPI backend, React frontend, and Docker-based monorepo structure.
- I used AI to iterate on schema design and refine the LLM prompt structure for incident analysis output.
- AI was useful for generating candidate test cases, especially around parser edge cases and repeated failure patterns.
- I compared multiple parser strategies with AI assistance before settling on the current deterministic-first approach.
- I used AI to review edge cases, identify missing polish work, and tighten portfolio-facing documentation.
The implementation, structure, and tradeoff decisions were still reviewed and shaped manually.