An advanced, production-ready Multi-Agent Customer Support system powered by LangGraph and DeepSeek AI. This project demonstrates a sophisticated "Supervisor-Specialist" architecture capable of autonomous routing, PII scrubbing, real-time analytics, and human-in-the-loop escalation. Built for businesses that require high-accuracy automated support with enterprise-grade security.
- 🧠 Multi-Agent Orchestration: Uses a central Supervisor Agent to dynamically route queries to specialist agents (Order, Billing, Technical, General).
- 🛡️ Enterprise Security: Integrated PII Scrubbing engine masks sensitive data (Emails, Phone Numbers) before processing and storage.
- 📉 Real-time Token & Cost Tracking: Built-in analytics dashboard showing live $ cost estimation and token usage per session.
- 🤝 Human-in-the-Loop: Intelligent escalation logic that "pauses" the AI and locks the chat interface when a human specialist is required.
- ⚡ Streaming UI: Modern Streamlit interface with real-time response streaming for reduced perceived latency.
- 🐘 Scalable Persistence: Dual-mode database support (SQLite with WAL-mode for local dev, PostgreSQL for production scaling).
- 🔄 Fault Tolerance: Implementation of a global `error_handler` node and LLM fallback strategies.
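The PII-scrubbing feature above can be sketched with plain regular expressions. This is a minimal illustration, not the project's actual engine; the `scrub_pii` helper and the `[EMAIL]`/`[PHONE]` masks are hypothetical names:

```python
import re

# Hypothetical mask tokens; the real engine may use different placeholders.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Mask emails and phone numbers before text reaches the LLM or storage."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text
```

In a graph like this one, such a scrubber would run in the Safety Layer node so every specialist receives already-masked input.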
- Core Framework: Python 3.11+, LangGraph
- Large Language Model: DeepSeek AI (OpenAI-compatible SDK)
- API Interface: FastAPI with Uvicorn
- Frontend UI: Streamlit
- Database: SQLite (Local) / PostgreSQL (Production)
- Infrastructure: Docker & Docker Compose
The system follows a Directed Acyclic Graph (DAG) workflow:
- Identify Node: Extracts customer identity and tier from context.
- Supervisor (Router): Analyzes user intent using structured Pydantic outputs and delegates to the appropriate specialist.
- Specialist Team:
- Order Specialist: Queries logistics database for tracking.
- Technical Specialist: Grounded troubleshooting via local KB.
- Billing Specialist: Handles financial inquiries and escalation triggers.
- Safety Layer: Centralized PII scrubbing and context window trimming (Sliding Window: 10 messages).
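The Supervisor's routing contract and the Safety Layer's sliding-window trimming can be sketched as follows. The `RouteDecision` shape mirrors the structured (Pydantic) output described above, but the keyword rules here are illustrative assumptions; the real Supervisor delegates via the LLM:

```python
from dataclasses import dataclass
from typing import Literal

Specialist = Literal["order", "billing", "technical", "general"]

@dataclass
class RouteDecision:
    # Mirrors the structured output the Supervisor asks the LLM to emit.
    specialist: Specialist
    reason: str

def trim_context(messages: list[dict], window: int = 10) -> list[dict]:
    """Sliding window: keep only the last `window` messages per LLM call."""
    return messages[-window:]

def route(query: str) -> RouteDecision:
    # Illustrative keyword fallback, not the project's actual routing logic.
    q = query.lower()
    if "order" in q or "tracking" in q:
        return RouteDecision("order", "logistics keywords")
    if "refund" in q or "invoice" in q or "charge" in q:
        return RouteDecision("billing", "financial keywords")
    if "error" in q or "crash" in q:
        return RouteDecision("technical", "troubleshooting keywords")
    return RouteDecision("general", "no specialist match")
```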
- Python 3.11 or higher
- DeepSeek API Key
- Docker (optional, for containerized run)
- Clone the Repository

  ```bash
  git clone https://github.com/yourusername/langgraph-support-agent.git
  cd langgraph-support-agent
  ```

- Install Dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Configure Environment: create a `.env` file in the root directory:

  ```env
  DEEPSEEK_API_KEY=your_api_key_here
  API_KEY=agentic_secret_key_2026
  DATABASE_URL=customers.db  # Use postgres:// for production
  LOG_LEVEL=INFO
  ```
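The `DATABASE_URL` toggle above could be interpreted with a small helper like this (a hedged sketch; the project's actual connection logic may differ):

```python
import sqlite3

def connect(database_url: str):
    """Pick the backend from DATABASE_URL: a postgres:// URL for production,
    otherwise a local SQLite file opened in WAL mode for concurrent readers."""
    if database_url.startswith(("postgres://", "postgresql://")):
        import psycopg2  # production-only dependency, assumed installed
        return psycopg2.connect(database_url)
    conn = sqlite3.connect(database_url)
    conn.execute("PRAGMA journal_mode=WAL")  # the WAL-mode feature noted above
    return conn
```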
Start the Backend API:

```bash
py -m uvicorn api:app --host 0.0.0.0 --port 8001 --reload
```

Start the Analytics Dashboard:

```bash
py -m streamlit run dashboard.py --server.port 8504
```

- Chat Interface: http://localhost:8503
- Analytics Dashboard: http://localhost:8504
- Interactive API Docs: http://localhost:8001/docs
The easiest way to run the production stack locally is via Docker:
```bash
docker-compose up --build
```

This spins up both the FastAPI backend and the Streamlit frontend in isolated containers.
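A minimal `docker-compose.yml` consistent with the two services described might look like this (illustrative only; the service names, commands, and shared build context are assumptions based on this README):

```yaml
services:
  api:
    build: .
    command: uvicorn api:app --host 0.0.0.0 --port 8001
    ports:
      - "8001:8001"
    env_file: .env
  frontend:
    build: .
    command: streamlit run dashboard.py --server.port 8504
    ports:
      - "8504:8504"
    depends_on:
      - api
```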
Placeholder: Add your UI screenshots here
- Sidebar Analytics: [Image]
- Specialist Switching: [Image]
- Human Takeover State: [Image]
- Vector RAG Integration: Replace keyword search with ChromaDB/Pinecone for technical documentation.
- Multi-Model Fallback: Automated switching to GPT-4o if DeepSeek latency exceeds thresholds.
- Voice Support: Twilio integration for automated IVR support.
- LangSmith Tracing: Deep observability for agentic reasoning paths.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE for more information.
Built with ❤️ for the Agentic AI Community.