AdamHameed/AI-log-summarizer

ai-log-analyzer

ai-log-analyzer is a full-stack incident triage project for uploading application logs, extracting repeated failures, enriching findings with an LLM, storing analyses in PostgreSQL, and exposing Prometheus metrics.

It is designed as a compact SaaS-style monorepo that demonstrates backend architecture, frontend product thinking, observability, and practical AI integration in one portfolio-ready project.

Project Overview

Core capabilities:

  • Upload .log and .txt files through a React dashboard
  • Parse warnings, errors, stack traces, timestamps, repeated failures, and likely components
  • Store analyses and extracted issues in PostgreSQL with async SQLAlchemy
  • Enrich deterministic findings with OpenAI Responses API analysis
  • Expose Prometheus metrics for requests, uploads, and analysis lifecycle events
  • Browse historical analyses and inspect incident details in a simple dashboard
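
The "parse repeated failures" step can be sketched in plain Python. This is an illustrative sketch only, not the project's actual parser: the log line shape matched by the regex and the grouping key are assumptions.

```python
import re
from collections import Counter

# Matches lines like "2024-05-01T12:00:00 ERROR payment-service: connection refused"
# (an assumed log shape; the real parser handles more formats and stack traces).
LINE_RE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<level>WARNING|ERROR)\s+(?P<component>[\w-]+):\s+(?P<msg>.+)$"
)

def extract_repeated_failures(text: str, threshold: int = 2) -> dict[str, int]:
    """Count identical WARNING/ERROR messages, keeping those seen >= threshold times."""
    counts = Counter()
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            counts[f"{m['component']}: {m['msg']}"] += 1
    return {msg: n for msg, n in counts.items() if n >= threshold}
```

Grouping on component plus message (rather than the raw line) keeps timestamps from splitting otherwise identical failures into separate buckets.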

Architecture

flowchart LR
    U[User] --> F[React + Vite Frontend]
    F -->|POST /api/v1/analyses/upload| B[FastAPI Backend]
    F -->|GET analyses| B
    B --> P[Deterministic Log Parser]
    B --> L[LLM Analysis Service]
    L --> O[OpenAI Responses API]
    B --> D[(PostgreSQL)]
    B --> M[/metrics]
    M --> PR[Prometheus]
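
The flow above — parse deterministically, enrich with the LLM, persist — can be sketched as a simple pipeline. Everything here (function names, the stub parser and enricher) is hypothetical; the real backend implements this as async services with a PostgreSQL repository.

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    filename: str
    issues: list[str]
    summary: str = ""
    status: str = "pending"

def parse(text: str) -> list[str]:
    # Stand-in for the deterministic parser: keep WARNING/ERROR lines.
    return [ln for ln in text.splitlines() if "ERROR" in ln or "WARNING" in ln]

def enrich(issues: list[str]) -> str:
    # Stand-in for the LLM call (the OpenAI Responses API in the real service).
    return f"{len(issues)} issue(s) detected"

def run_analysis(filename: str, text: str) -> Analysis:
    analysis = Analysis(filename=filename, issues=parse(text))
    analysis.summary = enrich(analysis.issues)
    analysis.status = "succeeded"
    return analysis
```

Running deterministic parsing first means the LLM receives pre-extracted findings rather than raw logs, which keeps the prompt small and the output grounded.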

Tech Stack

  • Backend: FastAPI, Python 3.12, SQLAlchemy 2.0 async, Pydantic v2
  • Frontend: React, TypeScript, Vite, React Router
  • Database: PostgreSQL
  • AI: OpenAI Python SDK with the Responses API
  • Observability: prometheus_client
  • Infra: Docker Compose
  • Quality tooling: Pytest, Ruff, Prettier

Repository Layout

.
├── .env.example
├── .gitignore
├── .prettierignore
├── .prettierrc.json
├── Makefile
├── docker-compose.yml
├── sample_logs
│   ├── payment-service.log
│   └── worker-timeout.txt
├── backend
│   ├── Dockerfile
│   ├── pyproject.toml
│   ├── app
│   │   ├── api
│   │   ├── core
│   │   ├── models
│   │   ├── repositories
│   │   ├── schemas
│   │   └── services
│   └── tests
│       ├── test_health.py
│       └── test_log_parser.py
└── frontend
    ├── Dockerfile
    ├── package.json
    └── src
        ├── components
        └── pages

Local Setup

0. Install Python and Node.js

You will need:

  • Python 3.12
  • Node.js 22 or newer

To install Node.js:

  • macOS with Homebrew: brew install node
  • Windows with winget: winget install OpenJS.NodeJS
  • Linux with nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
nvm install 22
nvm use 22

1. Configure environment variables

cp .env.example .env

Set a valid OPENAI_API_KEY if you want the LLM enrichment step to run successfully.
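
The backend reads its configuration from the environment (the project uses Pydantic v2 for this; below is a dependency-free sketch of the same idea, and the variable names beyond OPENAI_API_KEY are assumptions).

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    openai_api_key: str
    database_url: str

def load_settings() -> Settings:
    # DATABASE_URL and its default are illustrative; check .env.example
    # for the variable names the project actually uses.
    return Settings(
        openai_api_key=os.getenv("OPENAI_API_KEY", ""),
        database_url=os.getenv("DATABASE_URL", "postgresql+asyncpg://localhost/app"),
    )
```

Keeping secrets in the environment (rather than in code) is what makes the `.env.example` → `.env` copy step safe to commit.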

2. Create a Python virtual environment

cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -e ".[dev]"

On Windows PowerShell:

cd backend
py -3 -m venv .venv
.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
pip install -e ".[dev]"

3. Install frontend dependencies

cd frontend
npm install

4. Start the stack

make up

Or directly:

docker compose up --build

5. Open the app

  • Frontend: http://localhost:5173
  • Backend API: http://localhost:8000
  • Health: http://localhost:8000/api/v1/health
  • Metrics: http://localhost:8000/metrics

API Endpoints

Method  Endpoint                          Description
GET     /api/v1/health                    Health and version check
GET     /api/v1/analyses                  List analyses ordered by creation time
GET     /api/v1/analyses/{analysis_id}    Fetch a single analysis and its extracted issues
POST    /api/v1/analyses/upload           Upload a log file for parsing and LLM analysis
GET     /metrics                          Prometheus metrics endpoint
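
The upload endpoint can be exercised from a small script. A standard-library sketch is below; the multipart field name `file` and the response shape are assumptions about the backend's upload handler.

```python
import json
import urllib.request
import uuid

BASE_URL = "http://localhost:8000"

def upload_log(path: str, base_url: str = BASE_URL) -> dict:
    """POST a log file to /api/v1/analyses/upload and return the JSON response."""
    boundary = uuid.uuid4().hex
    with open(path, "rb") as fh:
        content = fh.read()
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{path}"\r\n'
        "Content-Type: text/plain\r\n\r\n"
    ).encode() + content + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        f"{base_url}/api/v1/analyses/upload",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With the stack running, `upload_log("sample_logs/payment-service.log")` should return the created analysis as JSON.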

Prometheus

The backend exposes Prometheus metrics at http://localhost:8000/metrics.

Example scrape config:

scrape_configs:
  - job_name: "ai-log-analyzer-backend"
    metrics_path: /metrics
    static_configs:
      - targets:
          - host.docker.internal:8000

Tracked metrics include:

  • http_requests_total
  • http_request_duration_seconds
  • analyses_created_total
  • analyses_succeeded_total
  • analyses_failed_total
  • uploaded_bytes_total
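
In the backend these metrics come from prometheus_client; the text exposition format they produce can be illustrated with a minimal standard-library stand-in (the help text below is invented for the example).

```python
class MiniCounter:
    """Tiny stand-in for prometheus_client.Counter, exposition-format only."""

    def __init__(self, name: str, help_text: str):
        self.name = name
        self.help_text = help_text
        self.value = 0.0

    def inc(self, amount: float = 1.0) -> None:
        # Counters only ever go up; Prometheus computes rates from them.
        self.value += amount

    def exposition(self) -> str:
        return (
            f"# HELP {self.name} {self.help_text}\n"
            f"# TYPE {self.name} counter\n"
            f"{self.name} {self.value}\n"
        )

analyses_created_total = MiniCounter("analyses_created_total", "Analyses created.")
```

This is roughly the line format you will see when visiting /metrics, alongside histogram buckets for the duration metric.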

Screenshots

Placeholder slots for portfolio screenshots:

  • Upload dashboard screenshot goes here
  • Analysis list screenshot goes here
  • Analysis detail screenshot goes here

Sample Logs

Two example logs are included for local testing:

  • sample_logs/payment-service.log
  • sample_logs/worker-timeout.txt

Tests

Run backend tests with:

make test

Current coverage includes:

  • parser logic
  • health endpoint behavior

Formatting and Linting

Backend:

python -m pip install -e "backend[dev]"
python -m ruff check backend
python -m ruff format backend

Frontend:

cd frontend
npm install
npm run lint
npm run format

Make Commands

make up
make down
make logs
make test
make fmt

Future Improvements

  • Add Alembic migrations for schema evolution
  • Add background job processing for large uploads
  • Add authentication and multi-tenant workspaces
  • Add richer parser heuristics for more log formats
  • Add dashboard charts and trend views
  • Add CI automation for tests, linting, and container builds

How I Used AI During Development

I used AI as an engineering accelerator, not as a substitute for design decisions or review.

  • AI helped scaffold initial boilerplate for the FastAPI backend, React frontend, and Docker-based monorepo structure.
  • I used AI to iterate on schema design and refine the LLM prompt structure for incident analysis output.
  • AI was useful for generating candidate test cases, especially around parser edge cases and repeated failure patterns.
  • I compared multiple parser strategies with AI assistance before settling on the current deterministic-first approach.
  • I used AI to review edge cases, identify missing polish work, and tighten portfolio-facing documentation.

The implementation, structure, and tradeoff decisions were still reviewed and shaped manually.

About

A program that reads log files and summarizes them with AI tools.
