Version: 6.0.0
This guide helps developers quickly understand EDDI's architecture and start building agents.
EDDI is middleware for conversational AI: it sits between your app and AI services (OpenAI, Claude, etc.), providing:
- Orchestration: Control when and how LLMs are called
- Business Logic: IF-THEN rules for decision-making
- State Management: Maintain conversation history and context
- API Integration: Call external REST APIs from agent logic
Every user message goes through a pipeline of tasks:
Input → Parser → Rules → API/LLM → Output
Each task transforms the Conversation Memory (a state object containing everything about the conversation).
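As a mental model, the pipeline can be sketched as a chain of tasks that each read from and write to a shared memory object. The sketch below is illustrative Java, not EDDI's real classes; the task logic, map keys, and expression strings are simplified assumptions:

```java
import java.util.*;
import java.util.function.Consumer;

// Illustrative sketch of the pipeline idea: each task reads from and
// writes to a shared conversation-memory map (not EDDI's actual classes).
public class PipelineSketch {

    static final List<Consumer<Map<String, Object>>> TASKS = List.of(
        // Parser: turn raw input into expressions
        m -> m.put("expressions",
                m.get("input").equals("hello") ? List.of("greeting(hello)") : List.of()),
        // Rules: map expressions to actions
        m -> m.put("actions",
                ((List<?>) m.get("expressions")).stream()
                        .anyMatch(e -> e.toString().startsWith("greeting"))
                        ? List.of("welcome_action") : List.of()),
        // Output: map actions to a reply
        m -> m.put("output",
                ((List<?>) m.get("actions")).contains("welcome_action")
                        ? "Hello! How can I help you today?" : "")
    );

    static Map<String, Object> run(String input) {
        Map<String, Object> memory = new HashMap<>();
        memory.put("input", input);
        TASKS.forEach(task -> task.accept(memory));
        return memory;
    }

    public static void main(String[] args) {
        System.out.println(run("hello").get("output"));
    }
}
```

Each task sees everything earlier tasks stored, which is what makes the stages pluggable: the rules task does not care whether "expressions" came from a dictionary parser or some other component.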
Agents aren't code; they're JSON configurations:
Agent (list of packages)
└─ Workflow (list of extensions)
├─ Behavior Rules (.behavior.json)
├─ HTTP Calls (.httpcalls.json)
├─ LangChain (.langchain.json)
└─ Output Templates (.output.json)
- Java 25
- Maven 3.8.4
- MongoDB 6.0+
- Docker (optional, recommended)
# Clone repo
git clone https://github.com/labsai/EDDI.git
cd EDDI
# Start EDDI + MongoDB
docker-compose up
# Access dashboard
open http://localhost:7070

# Clone repo
git clone https://github.com/labsai/EDDI.git
cd EDDI
# Start MongoDB (or use Docker)
# On Mac: brew services start mongodb-community
# On Linux: sudo systemctl start mongod
# Run EDDI in dev mode
./mvnw compile quarkus:dev
# Access dashboard
open http://localhost:7070

💡 Secrets Vault: If you plan to store API keys through the Manager UI or use ${eddivault:...} references, set the vault master key first:

export EDDI_VAULT_MASTER_KEY=my-dev-passphrase          # Linux/macOS
$env:EDDI_VAULT_MASTER_KEY = "my-dev-passphrase"        # Windows PowerShell

Without this, the vault is disabled and secret endpoints return HTTP 503. Any passphrase works for local dev. See Secrets Vault for full details.
If you plan to use the Web Search or Weather tools in your agents, you need to set up API keys in your environment or application.properties.
Web Search (Google):
eddi.tools.websearch.provider=google
eddi.tools.websearch.google.api-key=...
eddi.tools.websearch.google.cx=...
Weather (OpenWeatherMap):
eddi.tools.weather.openweathermap.api-key=...
See LangChain Documentation for details.
Dictionaries define what users can say:
curl -X POST http://localhost:7070/regulardictionarystore/regulardictionaries \
-H "Content-Type: application/json" \
-d '{
"words": [
{
"word": "hello",
"expressions": "greeting(hello)",
"frequency": 0
},
{
"word": "hi",
"expressions": "greeting(hi)",
"frequency": 0
}
],
"phrases": []
}'

Response: Dictionary ID (e.g., eddi://ai.labs.parser.dictionaries.regular/regulardictionarystore/regulardictionaries/abc123?version=1)
Rules define what the agent does:
curl -X POST http://localhost:7070/behaviorstore/behaviorsets \
-H "Content-Type: application/json" \
-d '{
"behaviorGroups": [
{
"name": "Greetings",
"behaviorRules": [
{
"name": "Welcome",
"conditions": [
{
"type": "inputmatcher",
"configs": {
"expressions": "greeting(*)",
"occurrence": "currentStep"
}
}
],
"actions": ["welcome_action"]
}
]
}
]
}'

Response: Behavior set ID
curl -X POST http://localhost:7070/outputstore/outputsets \
-H "Content-Type: application/json" \
-d '{
"outputSet": [
{
"action": "welcome_action",
"timesOccurred": 0,
"outputs": [
{
"valueAlternatives": [
"Hello! How can I help you today?"
]
}
]
}
]
}'

Response: Output set ID
Workflows bundle extensions together:
curl -X POST http://localhost:7070/packagestore/packages \
-H "Content-Type: application/json" \
-d '{
"packageExtensions": [
{
"type": "eddi://ai.labs.parser.dictionaries.regular",
"extensions": {
"uri": "eddi://ai.labs.regulardictionary/regulardictionarystore/regulardictionaries/abc123?version=1"
}
},
{
"type": "eddi://ai.labs.behavior",
"extensions": {
"uri": "eddi://ai.labs.behavior/behaviorstore/behaviorsets/def456?version=1"
},
"config": {
"appendActions": true
}
},
{
"type": "eddi://ai.labs.output",
"extensions": {
"uri": "eddi://ai.labs.output/outputstore/outputsets/ghi789?version=1"
}
}
]
}'

Response: Workflow ID
curl -X POST http://localhost:7070/agentstore/agents \
-H "Content-Type: application/json" \
-d '{
"packages": [
"eddi://ai.labs.package/packagestore/packages/xyz123?version=1"
]
}'

Response: Agent ID (e.g., agent-abc-123)
curl -X POST "http://localhost:7070/administration/production/deploy/agent-abc-123?version=1"

# Start conversation
curl -X POST http://localhost:7070/agents/agent-abc-123/start \
-H "Content-Type: application/json" \
-d '{"input": "hello"}'
# Response includes conversationId
# {
# "conversationId": "conv-123",
# "conversationState": "READY",
# "conversationOutputs": [
# {"output": ["Hello! How can I help you today?"]}
# ]
# }
# Continue conversation
curl -X POST http://localhost:7070/agents/conv-123 \
-H "Content-Type: application/json" \
-d '{"input": "hi"}'

curl -X POST http://localhost:7070/langchainstore/langchains \
-H "Content-Type: application/json" \
-d '{
"tasks": [
{
"actions": ["send_to_ai"],
"id": "openai_chat",
"type": "openai",
"description": "OpenAI ChatGPT integration",
"parameters": {
"apiKey": "your-openai-api-key",
"modelName": "gpt-4o",
"temperature": "0.7",
"systemMessage": "You are a helpful assistant",
"sendConversation": "true",
"addToOutput": "true"
}
}
]
}'

Add this extension to your package:
{
"type": "eddi://ai.labs.llm",
"extensions": {
"uri": "eddi://ai.labs.llm/langchainstore/langchains/langchain-id?version=1"
}
}

Then add a behavior rule that triggers the LLM action:

{
"name": "Ask AI",
"conditions": [
{
"type": "inputmatcher",
"configs": {
"expressions": "question(*)",
"occurrence": "currentStep"
}
}
],
"actions": ["send_to_ai"]
}

Now when users ask questions, the LLM is automatically called!
Let's trace what happens when a user says "hello":
POST /agents/agent-abc-123/start
{"input": "hello"}

- Validates agent ID
- Creates/loads conversation memory
- Submits to ConversationCoordinator
- Ensures sequential processing (no race conditions)
- Queues message for this conversation
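The ordering guarantee can be pictured with a small sketch: give every conversation its own single-threaded "lane", so messages for the same conversation run strictly in submission order while different conversations proceed in parallel. This is illustrative only; EDDI's actual ConversationCoordinator differs in detail:

```java
import java.util.*;
import java.util.concurrent.*;

// Hedged sketch: one way a coordinator can serialize messages per
// conversation. Illustrative only, not EDDI's real implementation.
public class CoordinatorSketch {
    private final ConcurrentHashMap<String, ExecutorService> lanes = new ConcurrentHashMap<>();

    // Tasks submitted for the same conversationId run strictly in submission
    // order, because each conversation gets its own single-threaded executor.
    public Future<?> submitInOrder(String conversationId, Runnable task) {
        return lanes
                .computeIfAbsent(conversationId, id -> Executors.newSingleThreadExecutor())
                .submit(task);
    }

    public void shutdown() {
        lanes.values().forEach(ExecutorService::shutdown);
    }

    public static void main(String[] args) throws Exception {
        CoordinatorSketch coordinator = new CoordinatorSketch();
        List<Integer> processed = Collections.synchronizedList(new ArrayList<>());

        Future<?> last = null;
        for (int i = 0; i < 5; i++) {
            int n = i;
            last = coordinator.submitInOrder("conv-123", () -> processed.add(n));
        }
        last.get(); // single-threaded lane: all earlier tasks finished too

        System.out.println(processed); // in submission order
        coordinator.shutdown();
    }
}
```

A single-threaded executor is a simple way to get per-key FIFO ordering without explicit locks; the trade-off is one idle thread per active conversation, which a production coordinator would manage more carefully.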
Parser Task:
Input: "hello"
→ Parses using dictionary
→ Output: expressions = ["greeting(hello)"]
→ Stores in memory
Behavior Rules Task:
Reads: expressions = ["greeting(hello)"]
→ Evaluates rules
→ Rule matches: "if greeting(*) then welcome_action"
→ Output: actions = ["welcome_action"]
→ Stores in memory
Output Task:
Reads: actions = ["welcome_action"]
→ Looks up output template for "welcome_action"
→ Output: "Hello! How can I help you today?"
→ Stores in memory
- Memory saved to MongoDB
- Response returned to user
The state object passed through the pipeline:
IConversationMemory memory = ...;
// Read user input
String input = memory.getCurrentStep().getLatestData("input").getResult();
// Store parsed data
memory.getCurrentStep().storeData(
dataFactory.createData("expressions", expressions)
);
// Access conversation properties
String userName = memory.getConversationProperties().get("userName");

The interface that all tasks implement:
public class MyTask implements ILifecycleTask {
@Override
public void execute(IConversationMemory memory, Object component) {
// 1. Read from memory
String input = memory.getCurrentStep().getLatestData("input").getResult();
// 2. Process
String result = process(input);
// 3. Write to memory
memory.getCurrentStep().storeData(
dataFactory.createData("myResult", result)
);
}
}

The ConversationCoordinator ensures messages are processed in order:
// Messages for same conversation execute sequentially
coordinator.submitInOrder(conversationId, () -> {
processMessage(memory, input);
return null;
});

Only call the LLM for complex queries:
{
"behaviorRules": [
{
"name": "Simple Greeting",
"conditions": [
{ "type": "inputmatcher", "configs": { "expressions": "greeting(*)" } }
],
"actions": ["simple_greeting"]
},
{
"name": "Complex Question",
"conditions": [
{ "type": "inputmatcher", "configs": { "expressions": "question(*)" } }
],
"actions": ["send_to_ai"]
}
]
}

Fetch data, then ask the LLM to format it:
{
"behaviorRules": [
{
"name": "Weather Query",
"conditions": [
{
"type": "inputmatcher",
"configs": { "expressions": "entity(weather)" }
}
],
"actions": ["httpcall(weather-api)", "send_to_ai"]
}
]
}

The LLM receives the API response in memory and can format it naturally.
Use context passed from your app:
curl -X POST http://localhost:7070/agents/agent-abc-123/start \
-H "Content-Type: application/json" \
-d '{
"input": "What is my name?",
"context": {
"userName": {"type": "string", "value": "John"},
"userId": {"type": "string", "value": "user-123"}
}
}'

Access it in the output template:
Hello {context.userName}!
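Conceptually, resolving such a template is just placeholder substitution against the context map sent at conversation start. Here is a hedged sketch of that idea (illustrative Java; EDDI's actual templating engine differs):

```java
import java.util.*;
import java.util.regex.*;

// Hedged sketch of resolving {context.*} placeholders from the context
// passed at conversation start (not EDDI's real templating engine).
public class TemplateSketch {
    static String render(String template, Map<String, String> context) {
        Matcher m = Pattern.compile("\\{context\\.(\\w+)\\}").matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // Replace each placeholder with its context value ("" if absent).
            m.appendReplacement(out, Matcher.quoteReplacement(
                    context.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(render("Hello {context.userName}!", Map.of("userName", "John")));
        // prints: Hello John!
    }
}
```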
- Architecture Overview - Deep dive into design
- Behavior Rules - Master decision logic
- HTTP Calls - Integrate external APIs
- LangChain Integration - Configure LLMs
- Agent Father Deep Dive - Real-world example
Visit http://localhost:7070 to:
- Create agents visually
- Test conversations interactively
- Browse configurations
- Monitor deployments
Check the examples/ folder for:
- Weather agent (API integration)
- Support agent (multi-turn conversations)
- E-commerce agent (context management)
Create a custom lifecycle task:
@ApplicationScoped
public class MyCustomTask implements ILifecycleTask {
@Override
public String getId() {
return "ai.labs.mycompany.customtask";
}
@Override
public String getType() {
return "custom_processing";
}
@Override
public void execute(IConversationMemory memory, Object component) {
// Your logic here
}
}

Register it in CDI and it becomes available as an extension!
- Check deployment status: GET /administration/deploy/{agentId}
- Check conversation state: GET /conversationstore/conversations/{conversationId}
- Check logs for errors
- Verify dictionary expressions match your input
- Check rule conditions are correct
- Use occurrence: "anyStep" to match across the whole conversation
- Ensure behavior rule triggers the LLM action
- Check LangChain configuration is in the package
- Verify API key is correct
- Ensure MongoDB is running
- Check connection string in config
- Use the correct scope (conversation, not step)
- Documentation: https://github.com/labsai/EDDI/tree/main/docs
- GitHub: https://github.com/labsai/EDDI
- Issues: https://github.com/labsai/EDDI/issues
EDDI's power comes from its configurable pipeline architecture:
- Agents are JSON configurations, not code
- Everything flows through Conversation Memory
- Tasks are pluggable and reusable
- LLMs are orchestrated, not just proxied
Start simple, then add complexity as needed. The architecture scales from basic agents to sophisticated multi-API workflows.