| Requirement | Version | Installation |
|---|---|---|
| Python | 3.11+ | python.org |
| Node.js | 18+ | nodejs.org |
| PostgreSQL | 14+ | postgresql.org |
| Redis | 7+ | redis.io |
| Docker | 20+ | docker.com |
| LiveKit Server | Latest | livekit.io |
```bash
git clone <repository-url>
cd InterviewLab
cd src

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r ../requirements.txt  # Or use uv/pip-tools

# Create .env file
cp .env.example .env
```

**Required Environment Variables:**
```env
# Database
DATABASE_URL=postgresql+asyncpg://user:password@localhost:5432/interviewlab

# Redis
REDIS_URL=redis://localhost:6379/0

# Security
SECRET_KEY=your-secret-key-here

# OpenAI
OPENAI_API_KEY=sk-...

# LiveKit
LIVEKIT_URL=wss://localhost:7880
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret
```
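To see how these variables might be consumed at startup, here is a minimal stdlib sketch; the `Settings` class and `from_env` helper are illustrative (the real app may use pydantic-settings or similar), but the variable names match the `.env` above.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Illustrative settings holder; not the project's actual config class."""
    database_url: str
    redis_url: str
    secret_key: str
    openai_api_key: str
    livekit_url: str
    livekit_api_key: str
    livekit_api_secret: str

    @classmethod
    def from_env(cls) -> "Settings":
        # Fail fast with a clear error if a required variable is missing
        def need(name: str) -> str:
            value = os.environ.get(name)
            if not value:
                raise RuntimeError(f"Missing required environment variable: {name}")
            return value

        return cls(
            database_url=need("DATABASE_URL"),
            redis_url=need("REDIS_URL"),
            secret_key=need("SECRET_KEY"),
            openai_api_key=need("OPENAI_API_KEY"),
            livekit_url=need("LIVEKIT_URL"),
            livekit_api_key=need("LIVEKIT_API_KEY"),
            livekit_api_secret=need("LIVEKIT_API_SECRET"),
        )
```

Failing fast here turns a cryptic runtime connection error into an obvious message about which variable is missing.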
```bash
# Create database
createdb interviewlab

# Run migrations
alembic upgrade head
```

```bash
cd frontend

# Install dependencies
npm install

# Create .env.local
cp .env.example .env.local
```

**Frontend Environment Variables:**
```env
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_LIVEKIT_URL=ws://localhost:7880
```

**Option 1: Docker (Recommended)**
```bash
docker run -d \
  -p 7880:7880 \
  -p 7881:7881 \
  -p 7882:7882/udp \
  -e LIVEKIT_KEYS="api-key: api-secret" \
  livekit/livekit-server
```

**Option 2: Binary**

```bash
# Download from livekit.io
./livekit-server --dev
```
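Clients authenticate to this server with JWTs minted from `LIVEKIT_API_KEY`/`LIVEKIT_API_SECRET`. In practice you would use LiveKit's server SDK (`AccessToken`) rather than rolling your own; purely to show what such a token contains, here is a stdlib HS256 sketch whose claim layout follows LiveKit's documented grant shape (`iss` = API key, `sub` = participant identity, `video` grant):

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_token(api_key: str, api_secret: str, identity: str, room: str,
               ttl: int = 3600) -> str:
    """Sketch of a LiveKit-style access token; prefer the official SDK."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": api_key,   # LiveKit uses the API key as the issuer
        "sub": identity,  # participant identity
        "exp": now + ttl,
        "video": {"room": room, "roomJoin": True},  # room-join grant
    }
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
    sig = hmac.new(api_secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"
```

The "Agent won't connect" row in the troubleshooting table usually comes down to these credentials: the key/secret baked into the token must match the pair the server was started with.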
**Terminal 1: Backend API**

```bash
cd src
uvicorn main:app --reload --port 8000
```

**Terminal 2: Frontend**

```bash
cd frontend
npm run dev
```

**Terminal 3: LiveKit Agent**

```bash
cd src
python -m src.agents.interview_agent dev
```

**Terminal 4: Redis (if not running)**

```bash
redis-server
```

| Service | URL |
|---|---|
| Frontend | http://localhost:3000 |
| Backend API | http://localhost:8000 |
| API Docs | http://localhost:8000/docs |
| LiveKit Server | ws://localhost:7880 |
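To confirm the HTTP services in this table are actually answering, a small stdlib probe is enough (the LiveKit `ws://` endpoint needs a WebSocket client, so it is skipped here; URLs are the defaults from the table):

```python
import urllib.error
import urllib.request


def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers HTTP at all (any status code)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # got a response, even 4xx/5xx: the server is up
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    for name, url in {
        "Frontend": "http://localhost:3000",
        "Backend API": "http://localhost:8000/docs",
    }.items():
        print(f"{name}: {'up' if is_up(url, timeout=0.5) else 'DOWN'}")
```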
- **Backend Changes**
  - Edit Python files in `src/`
  - API auto-reloads (`uvicorn --reload`)
  - Run tests: `pytest`
- **Frontend Changes**
  - Edit files in `frontend/`
  - Hot reload enabled
  - Type checking: `npm run type-check`
- **Orchestrator Changes**
  - Edit files in `src/services/orchestrator/`
  - Restart agent to pick up changes
  - Check logs for errors
```bash
# Create migration
alembic revision --autogenerate -m "description"

# Apply migration
alembic upgrade head

# Rollback
alembic downgrade -1
```

**Backend Tests:**

```bash
pytest tests/
pytest tests/ -v           # Verbose
pytest tests/ -k test_name # Specific test
```

**Frontend Tests:**
```bash
npm test
npm run test:watch
```

**Enable Debug Logging:**

```python
# src/core/logging.py
LOG_LEVEL = "DEBUG"
```

**View Logs:**

```bash
tail -f logs/interviewlab.log
```

**Database Inspection:**

```bash
psql interviewlab
```

```sql
SELECT * FROM interviews;
```

**Enable Agent Logs:**
```bash
export LIVEKIT_LOG=debug
python -m src.agents.interview_agent dev
```

**Check Agent State:**

- Logs show state transitions
- Check `src/services/logging/interview_logger.py` output
**React DevTools:**

- Install browser extension
- Inspect component state

**Network Inspection:**

- Chrome DevTools → Network tab
- Check API requests/responses
| Issue | Solution |
|---|---|
| Port already in use | Change port or kill process: `lsof -ti:8000 \| xargs kill` |
| Database connection error | Verify PostgreSQL is running: `pg_isready` |
| Redis connection error | Start Redis: `redis-server` |
| Agent won't connect | Check LiveKit server is running; verify credentials |
| Import errors | Activate venv, reinstall dependencies |
| Migration conflicts | Reset DB: `alembic downgrade base && alembic upgrade head` |
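For the "port already in use" row, a stdlib equivalent of the `lsof` probe can tell you whether anything is listening before you start a service (the helper name is illustrative):

```python
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful connection
        return s.connect_ex((host, port)) == 0


if __name__ == "__main__":
    # Default ports for the services used in this guide
    for port in (3000, 8000, 7880, 5432, 6379):
        print(port, "in use" if port_in_use(port) else "free")
```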
```mermaid
sequenceDiagram
    participant F as Frontend
    participant A as API
    participant D as Database
    participant AG as Agent
    participant O as Orchestrator
    F->>A: POST /interviews/123/submit-code
    A->>D: Save code to interview
    A->>D: Update conversation_history
    A->>F: 200 OK
    Note over F,AG: User speaks "I submitted code"
    F->>AG: Audio stream
    AG->>O: execute_step(code)
    O->>O: route_from_ingest to code_review
    O->>O: Execute code in sandbox
    O->>O: Analyze code quality
    O->>D: Save results
    O->>AG: Code review response
    AG->>F: TTS audio
```
Code is persisted to the database first, then the user's voice message triggers the orchestrator with current_code set. The route_from_ingest function detects code presence and routes directly to code_review, bypassing intent detection. The sandbox service executes code in isolated Docker containers, and get_code_metrics analyzes quality using AST parsing. Results are appended to code_submissions via reducer, ensuring atomic updates even with concurrent state modifications.
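The routing-plus-reducer behavior described above can be sketched in plain Python. The function and field names follow the text; the real implementation lives in the orchestrator and relies on LangGraph's `Annotated` reducer mechanism, which we simulate here with `operator.add`:

```python
import operator
from typing import Annotated, TypedDict


class InterviewState(TypedDict, total=False):
    # LangGraph-style annotation: updates to this field are merged with
    # operator.add (appended) rather than overwritten, which is what makes
    # concurrent submissions safe.
    code_submissions: Annotated[list, operator.add]
    current_code: str


def route_from_ingest(state: InterviewState) -> str:
    """Sketch of the routing rule: code present -> code_review, else intent detection."""
    return "code_review" if state.get("current_code") else "detect_intent"


def apply_update(state: InterviewState, update: dict) -> InterviewState:
    """Simulate the reducer merge LangGraph performs for code_submissions."""
    merged = dict(state)
    if "code_submissions" in update:
        merged["code_submissions"] = operator.add(
            state.get("code_submissions", []), update["code_submissions"]
        )
    return merged  # type: ignore[return-value]
```

Because each node returns only its delta and the reducer appends it, two nodes writing `code_submissions` in the same step both land in the list instead of one clobbering the other.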
**View Interview State:**

```python
# In an async Python shell (e.g. `python -m asyncio`), with `db` an open
# SQLAlchemy AsyncSession
from src.services.data.state_manager import interview_to_state
from src.models.interview import Interview
from sqlalchemy import select

# Load interview
interview = await db.execute(select(Interview).where(Interview.id == 123))
state = interview_to_state(interview.scalar_one())
print(state)
```

**Check LangGraph State:**
- Agent logs show state transitions
- Database checkpoints contain full state
- Redis cache (if enabled) shows current state
**Load Testing:**

```bash
# Install locust
pip install locust

# Run load test
locust -f tests/load_test.py
```

**Monitor Resources:**
```bash
# CPU/Memory
htop

# Database connections
psql -c "SELECT count(*) FROM pg_stat_activity;"

# Redis memory
redis-cli info memory
```