# LLMCompiler

An implementation of LLMCompiler using LangGraph: an agent architecture that speeds up agentic tasks through eager execution within a DAG while cutting costs by minimizing redundant LLM calls.

LLMCompiler has three main components:
- Planner: Streams a DAG of tasks to execute
- Task Fetching Unit: Schedules and executes tasks as soon as they are executable
- Joiner: Responds to the user or triggers a second plan (replanning)
## Architecture

```
User Query → Planner → Task Scheduler → Joiner → Response/Replan
                ↓            ↓
              Tasks   Parallel Execution
```
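The core idea of the Task Fetching Unit — run each task the moment its dependencies finish, so independent branches of the DAG execute in parallel — can be sketched independently of LangGraph. The `run_dag` helper below is a hypothetical toy, not this repo's API:

```python
from concurrent.futures import ThreadPoolExecutor, Future

def run_dag(tasks):
    """tasks: {name: (deps, fn)} — run fn(*dep_results) as soon as all deps finish."""
    futures: dict[str, Future] = {}

    def run(name):
        deps, fn = tasks[name]
        # Block only on this task's own dependencies; siblings run concurrently.
        return fn(*(futures[d].result() for d in deps))

    with ThreadPoolExecutor() as pool:
        pending = dict(tasks)
        while pending:
            # Submit every task whose dependencies have already been scheduled.
            for name in list(pending):
                if all(d in futures for d in pending[name][0]):
                    futures[name] = pool.submit(run, name)
                    del pending[name]
        return {name: f.result() for name, f in futures.items()}

# Two independent "searches" run in parallel; the join waits for both results.
results = run_dag({
    "search_a": ([], lambda: 2),
    "search_b": ([], lambda: 3),
    "join":     (["search_a", "search_b"], lambda a, b: a + b),
})
print(results["join"])  # → 5
```

The real scheduler additionally streams tasks from the Planner as they are generated, rather than starting from a complete DAG.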
## Installation

- Clone the repository:

  ```shell
  cd /home/awab-ml/Downloads/LLMcompiler
  ```

- Create a virtual environment:

  ```shell
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Set up environment variables:

  ```shell
  cp .env.example .env
  # Edit .env with your API keys
  ```

Required keys:

- OpenAI API Key: for LLM calls
- Tavily API Key: for search functionality
- LangSmith API Key (optional): for tracing and debugging
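After copying `.env.example`, the file should hold the keys listed above. The variable names below are a guess at the template's contents — check `.env.example` for the names the code actually reads:

```shell
# Hypothetical .env contents — verify against .env.example
OPENAI_API_KEY=sk-...
TAVILY_API_KEY=tvly-...
LANGCHAIN_API_KEY=...   # optional, enables LangSmith tracing
```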
## Usage

```python
from src.core.graph import create_llm_compiler_graph

# Create the LLMCompiler graph
graph = create_llm_compiler_graph()

# Run a query
result = graph.invoke({
    "messages": [{"role": "user", "content": "What's the GDP of New York?"}]
})

print(result["messages"][-1].content)
```

Run the example scripts to see LLMCompiler in action:
```shell
# Simple question
python examples/simple_question.py

# Multi-hop reasoning
python examples/multi_hop.py

# Multi-step math
python examples/math_problem.py
```

## Project Structure

```
LLMcompiler/
├── src/           # Main source code
│   ├── core/      # Core components (Planner, Scheduler, Joiner)
│   ├── tools/     # Agent tools (search, math, etc.)
│   ├── parsers/   # Output parsers
│   ├── prompts/   # Prompt templates
│   ├── models/    # Data models
│   └── utils/     # Utilities
├── examples/      # Usage examples
├── notebooks/     # Jupyter notebooks
├── tests/         # Test suite
└── docs/          # Documentation
```
## Features

- ✅ Parallel task execution for improved speed
- ✅ Dynamic replanning based on intermediate results
- ✅ Reduced token usage through efficient planning
- ✅ Support for complex multi-hop reasoning
- ✅ Built-in search and math tools
- ✅ Extensible tool system
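The extensible tool system can be illustrated with a minimal registry sketch. Everything here (`register_tool`, `TOOLS`) is hypothetical — the actual interface lives in `src/tools/` and may differ:

```python
from typing import Callable

# Hypothetical registry the planner could draw tool names from.
TOOLS: dict[str, Callable[..., str]] = {}

def register_tool(name: str):
    """Decorator: register a callable under a name the planner can reference."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("math")
def evaluate(expression: str) -> str:
    # Toy math tool; a real one would use a safe expression parser.
    return str(eval(expression, {"__builtins__": {}}, {}))

print(TOOLS["math"]("2 * (3 + 4)"))  # → 14
```

Registering a tool this way keeps the planner decoupled from tool implementations: new capabilities are added by defining a function, not by editing the planning logic.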
## Testing

Run tests:

```shell
pytest tests/
```

Run with tracing (LangSmith):

```shell
export LANGCHAIN_TRACING_V2=true
python examples/simple_question.py
```

## License

MIT License