PraisonAI is a production-ready Multi-AI Agents framework with self-reflection, designed to create AI Agents to automate and solve problems ranging from simple tasks to complex challenges. By integrating PraisonAI Agents, AG2 (Formerly AutoGen), and CrewAI into a low-code solution, it streamlines the building and management of multi-agent LLM systems, emphasising simplicity, customisation, and effective human-agent collaboration.
| Feature | Code | Docs |
|---|---|---|
| Single Agent | Example | Docs |
| Multi Agents | Example | Docs |
| Auto Agents | Example | Docs |
| Self Reflection AI Agents | Example | Docs |
| Reasoning AI Agents | Example | Docs |
| Multi Modal AI Agents | Example | Docs |
| **Workflows** | | |
| Simple Workflow | Example | Docs |
| Workflow with Agents | Example | Docs |
| Agentic Routing (`route()`) | Example | Docs |
| Parallel Execution (`parallel()`) | Example | Docs |
| Loop over List/CSV (`loop()`) | Example | Docs |
| Evaluator-Optimizer (`repeat()`) | Example | Docs |
| Conditional Steps | Example | Docs |
| Workflow Branching | Example | Docs |
| Workflow Early Stop | Example | Docs |
| Workflow Checkpoints | Example | Docs |
| Add Custom Knowledge | Example | Docs |
| Memory (Short & Long Term) | Example | Docs |
| Chat with PDF Agents | Example | Docs |
| Code Interpreter Agents | Example | Docs |
| RAG Agents | Example | Docs |
| Async & Parallel Processing | Example | Docs |
| Math Agents | Example | Docs |
| Structured Output Agents | Example | Docs |
| LangChain Integrated Agents | Example | Docs |
| Callback Agents | Example | Docs |
| 100+ Custom Tools | Example | Docs |
| YAML Configuration | Example | Docs |
| 100+ LLM Support | Example | Docs |
| Deep Research Agents | Example | Docs |
| Query Rewriter Agent | Example | Docs |
| Native Web Search | Example | Docs |
| Web Fetch (Anthropic) | Example | Docs |
| Prompt Caching | Example | Docs |
| Claude Memory Tool | Example | Docs |
| File-Based Memory | Example | Docs |
| Built-in Search Tools | Example | Docs |
| Planning Mode | Example | Docs |
| Planning Tools | Example | Docs |
| Planning Reasoning | Example | Docs |
| MCP Transports | Example | Docs |
| WebSocket MCP | Example | Docs |
| MCP Security | Example | Docs |
| MCP Resumability | Example | Docs |
| Fast Context | Example | Docs |
| Image Generation Agent | Example | Docs |
| Image to Text Agent | Example | Docs |
| Video Agent | Example | Docs |
| Data Analyst Agent | Example | Docs |
| Finance Agent | Example | Docs |
| Shopping Agent | Example | Docs |
| Recommendation Agent | Example | Docs |
| Wikipedia Agent | Example | Docs |
| Programming Agent | Example | Docs |
| Markdown Agent | Example | Docs |
| Prompt Expander Agent | Example | Docs |
| Router Agent | Example | Docs |
| Prompt Chaining | Example | Docs |
| Evaluator Optimiser | Example | Docs |
| Orchestrator Workers | Example | Docs |
| Parallelisation | Example | Docs |
| Repetitive Agents | Example | Docs |
| Agent Handoffs | Example | Docs |
| Guardrails | Example | Docs |
| Sessions Management | Example | Docs |
| Human Approval | Example | Docs |
| Stateful Agents | Example | Docs |
| Autonomous Workflow | Example | Docs |
| Rules & Instructions | Example | Docs |
| Hooks | Example | Docs |
| Telemetry | Example | Docs |
| Camera Integration | Example | Docs |
| Project Docs (`.praison/docs/`) | Example | Docs |
| MCP Config Management | Example | Docs |
| AI Commit Messages | Example | Docs |
| @Mentions in Prompts | Example | Docs |
| Auto-Save Sessions | Example | Docs |
| History in Context | Example | Docs |
| Provider | Example |
|---|---|
| OpenAI | Example |
| Anthropic | Example |
| Google Gemini | Example |
| Ollama | Example |
| Groq | Example |
| DeepSeek | Example |
| xAI Grok | Example |
| Mistral | Example |
| Cohere | Example |
| Perplexity | Example |
| Fireworks | Example |
| Together AI | Example |
| OpenRouter | Example |
| HuggingFace | Example |
| Azure OpenAI | Example |
| AWS Bedrock | Example |
| Google Vertex | Example |
| Databricks | Example |
| Cloudflare | Example |
| AI21 | Example |
| Replicate | Example |
| SageMaker | Example |
| Moonshot | Example |
| vLLM | Example |
Lightweight package dedicated to coding:

```bash
pip install praisonaiagents
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
```

Create an `app.py` file and add the code below:

```python
from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")
```

Run:

```bash
python app.py
```

Create an `app.py` file and add the code below:

```python
from praisonaiagents import Agent, PraisonAIAgents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")

agents = PraisonAIAgents(agents=[research_agent, summarise_agent])
agents.start()
```

Run:

```bash
python app.py
```

Enable planning for any agent - the agent creates a plan, then executes it step by step:
```python
from praisonaiagents import Agent

def search_web(query: str) -> str:
    return f"Search results for: {query}"

agent = Agent(
    name="AI Assistant",
    instructions="Research and write about topics",
    planning=True,                # Enable planning mode
    planning_tools=[search_web],  # Tools for planning research
    planning_reasoning=True       # Chain-of-thought reasoning
)

result = agent.start("Research AI trends in 2025 and write a summary")
```

What happens:
- Agent creates a multi-step plan
- Executes each step sequentially
- Shows progress with context passing
- Returns final result
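The plan-then-execute loop described above can be sketched in plain Python. This is a toy illustration of the flow only, not PraisonAI's implementation; `make_plan` and `run_step` are hypothetical stand-ins for LLM calls:

```python
# Toy sketch of a planning loop: build a plan, run each step,
# and pass accumulated context forward to later steps.
def make_plan(task: str) -> list[str]:
    # A real agent would ask the LLM for steps; we hard-code two.
    return [f"research: {task}", f"summarise: {task}"]

def run_step(step: str, context: list[str]) -> str:
    # A real agent would call the LLM with the step plus prior context.
    return f"done({step}, prior={len(context)})"

def plan_and_execute(task: str) -> list[str]:
    context: list[str] = []
    for step in make_plan(task):
        context.append(run_step(step, context))  # context passing
    return context

results = plan_and_execute("AI trends")
print(results[-1])  # the final step sees one prior result
```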
Automated research with real-time streaming, web search, and citations using OpenAI or Gemini Deep Research APIs.
```python
from praisonaiagents import DeepResearchAgent

# OpenAI Deep Research
agent = DeepResearchAgent(
    model="o4-mini-deep-research",  # or "o3-deep-research"
    verbose=True
)
result = agent.research("What are the latest AI trends in 2025?")
print(result.report)
print(f"Citations: {len(result.citations)}")
```

```python
from praisonaiagents import DeepResearchAgent

# Gemini Deep Research
agent = DeepResearchAgent(
    model="deep-research-pro",  # Auto-detected as Gemini
    verbose=True
)
result = agent.research("Research quantum computing advances")
print(result.report)
```

Features:
- Multi-provider support (OpenAI, Gemini, LiteLLM)
- Real-time streaming with reasoning summaries
- Structured citations with URLs
- Built-in tools: web search, code interpreter, MCP, file search
- Automatic provider detection from model name
Transform user queries to improve RAG retrieval quality using multiple strategies.
```python
from praisonaiagents import QueryRewriterAgent, RewriteStrategy

agent = QueryRewriterAgent(model="gpt-4o-mini")

# Basic - expands abbreviations, adds context
result = agent.rewrite("AI trends")
print(result.primary_query)  # "What are the current trends in Artificial Intelligence?"

# HyDE - generates hypothetical document for semantic matching
result = agent.rewrite("What is quantum computing?", strategy=RewriteStrategy.HYDE)

# Step-back - generates broader context question
result = agent.rewrite("GPT-4 vs Claude 3?", strategy=RewriteStrategy.STEP_BACK)

# Sub-queries - decomposes complex questions
result = agent.rewrite("RAG setup and best embedding models?", strategy=RewriteStrategy.SUB_QUERIES)

# Contextual - resolves references using chat history
result = agent.rewrite("What about cost?", chat_history=[...])
```

Strategies:
- `BASIC`: Expand abbreviations, fix typos, add context
- `HYDE`: Generate hypothetical document for semantic matching
- `STEP_BACK`: Generate higher-level concept questions
- `SUB_QUERIES`: Decompose multi-part questions
- `MULTI_QUERY`: Generate multiple paraphrased versions
- `CONTEXTUAL`: Resolve references using conversation history
- `AUTO`: Automatically detect best strategy
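To illustrate what the `CONTEXTUAL` strategy does conceptually, here is a toy resolver in plain Python. The real agent uses an LLM; this string heuristic is invented purely for demonstration:

```python
# Toy illustration of contextual query rewriting: resolve a follow-up
# like "What about cost?" against the last topic in the chat history.
def contextual_rewrite(query: str, chat_history: list[str]) -> str:
    if not chat_history or not query.lower().startswith("what about"):
        return query                              # nothing to resolve
    topic = chat_history[-1]                      # last thing discussed
    aspect = query[len("what about"):].strip(" ?")
    return f"What is the {aspect} of {topic}?"

history = ["GPT-4 vs Claude 3 comparison"]
print(contextual_rewrite("What about cost?", history))
# "What is the cost of GPT-4 vs Claude 3 comparison?"
```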
Enable persistent memory for agents - works out of the box without any extra packages.
```python
from praisonaiagents import Agent

# Enable memory with a single parameter
agent = Agent(
    name="Personal Assistant",
    instructions="You are a helpful assistant that remembers user preferences.",
    memory=True,       # Enables file-based memory (no extra deps!)
    user_id="user123"  # Isolate memory per user
)

# Memory is automatically injected into conversations
result = agent.start("My name is John and I prefer Python")
# Agent will remember this for future conversations
```

Memory Types:
- **Short-term**: Rolling buffer of recent context (auto-expires)
- **Long-term**: Persistent important facts (sorted by importance)
- **Entity**: People, places, organizations with attributes
- **Episodic**: Date-based interaction history
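To make the storage model concrete, here is a minimal sketch of per-user, file-based JSON memory with importance-sorted recall. It mirrors the ideas above but is not PraisonAI's actual `FileMemory` implementation:

```python
import json
import tempfile
from pathlib import Path

# Minimal sketch of per-user file-based memory: JSON on disk, one file
# per user_id. Illustrative only, not PraisonAI's FileMemory.
class TinyFileMemory:
    def __init__(self, root: Path, user_id: str):
        self.path = root / f"{user_id}.json"   # per-user isolation

    def _load(self) -> list:
        if self.path.exists():
            return json.loads(self.path.read_text())
        return []

    def add(self, fact: str, importance: int = 1) -> None:
        items = self._load()
        items.append({"fact": fact, "importance": importance})
        self.path.write_text(json.dumps(items))

    def recall(self) -> list:
        # Long-term facts sorted by importance, as described above.
        items = sorted(self._load(), key=lambda m: -m["importance"])
        return [m["fact"] for m in items]

root = Path(tempfile.mkdtemp())
mem = TinyFileMemory(root, "user123")
mem.add("prefers Python", importance=2)
mem.add("name is John", importance=5)
print(mem.recall())  # ['name is John', 'prefers Python']
```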
Advanced Features:

```python
from praisonaiagents.memory import FileMemory

memory = FileMemory(user_id="user123")

# Session save/resume
memory.save_session("project_session", conversation_history=[...])
memory.resume_session("project_session")

# Context compression
memory.compress(llm_func=lambda p: agent.chat(p), max_items=10)

# Checkpointing
memory.create_checkpoint("before_refactor", include_files=["main.py"])
memory.restore_checkpoint("before_refactor", restore_files=True)

# Slash commands
memory.handle_command("/memory show")
memory.handle_command("/memory save my_session")
```

Storage Options:

| Option | Dependencies | Description |
|---|---|---|
| `memory=True` | None | File-based JSON storage (default) |
| `memory="file"` | None | Explicit file-based storage |
| `memory="sqlite"` | Built-in | SQLite with indexing |
| `memory="chromadb"` | chromadb | Vector/semantic search |
PraisonAI auto-discovers instruction files from your project root and git root:
| File | Description | Priority |
|---|---|---|
| `PRAISON.md` | PraisonAI native instructions | High |
| `PRAISON.local.md` | Local overrides (gitignored) | Higher |
| `CLAUDE.md` | Claude Code memory file | High |
| `CLAUDE.local.md` | Local overrides (gitignored) | Higher |
| `AGENTS.md` | OpenAI Codex CLI instructions | High |
| `GEMINI.md` | Gemini CLI memory file | High |
| `.cursorrules` | Cursor IDE rules | High |
| `.windsurfrules` | Windsurf IDE rules | High |
| `.claude/rules/*.md` | Claude Code modular rules | Medium |
| `.windsurf/rules/*.md` | Windsurf modular rules | Medium |
| `.cursor/rules/*.mdc` | Cursor modular rules | Medium |
| `.praison/rules/*.md` | Workspace rules | Medium |
| `~/.praison/rules/*.md` | Global rules | Low |
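Conceptually, higher-priority files override lower-priority ones when rules conflict. Here is a toy sketch of priority-ordered merging, where higher-priority content is injected later so it wins; the real loader's behaviour may differ, and the file contents are invented:

```python
# Sketch of merging auto-discovered rule files by priority tier.
# Priorities mirror the table above; a stable sort keeps discovery
# order within each tier.
PRIORITY = {"Low": 0, "Medium": 1, "High": 2, "Higher": 3}

def merge_rules(files: list) -> str:
    # files: (name, priority, content) tuples
    ordered = sorted(files, key=lambda f: PRIORITY[f[1]])
    return "\n".join(content for _, _, content in ordered)

files = [
    ("PRAISON.local.md", "Higher", "# local overrides"),
    ("~/.praison/rules/style.md", "Low", "# global rules"),
    ("PRAISON.md", "High", "# project rules"),
]
print(merge_rules(files))  # global, then project, then local overrides
```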
```python
from praisonaiagents import Agent

# Agent auto-discovers CLAUDE.md, AGENTS.md, GEMINI.md, etc.
agent = Agent(name="Assistant", instructions="You are helpful.")
# Rules are injected into the system prompt automatically
```

@Import Syntax:

```markdown
# CLAUDE.md
See @README for project overview
See @docs/architecture.md for system design
@~/.praison/my-preferences.md
```

Rule File Format (with YAML frontmatter):

```markdown
---
description: Python coding guidelines
globs: ["**/*.py"]
activation: always  # always, glob, manual, ai_decision
---
# Guidelines
- Use type hints
- Follow PEP 8
```

Automatic memory extraction with `AutoMemory`:

```python
from praisonaiagents.memory import FileMemory, AutoMemory

memory = FileMemory(user_id="user123")
auto = AutoMemory(memory, enabled=True)

# Automatically extracts and stores memories from conversations
memories = auto.process_interaction(
    "My name is John and I prefer Python for backend work"
)
# Extracts: name="John", preference="Python for backend"
```

Create reusable multi-step workflows with context passing and per-step agents:
```python
from praisonaiagents import Agent
from praisonaiagents.memory import WorkflowManager, Workflow, WorkflowStep

# Simple execution with default agent
agent = Agent(name="Assistant", llm="gpt-4o-mini")
manager = WorkflowManager()
result = manager.execute(
    "deploy",
    default_agent=agent,
    variables={"environment": "production"}
)

# Advanced: per-step agent configuration
workflow = Workflow(
    name="research_pipeline",
    default_llm="gpt-4o-mini",
    steps=[
        WorkflowStep(
            name="research",
            action="Research {{topic}}",
            agent_config={"role": "Researcher", "goal": "Find information"},
            tools=["tavily_search"]
        ),
        WorkflowStep(
            name="write",
            action="Write report based on {{previous_output}}",
            agent_config={"role": "Writer", "goal": "Write content"},
            context_from=["research"]  # Only include research output
        )
    ]
)

# Async execution
import asyncio
result = asyncio.run(manager.aexecute("deploy", default_llm="gpt-4o-mini"))
```

Key Features:
- **Context Passing**: Use `{{previous_output}}` and `{{step_name_output}}` variables
- **Per-Step Agents**: Configure different agents with roles, goals, and tools for each step
- **Async Execution**: Use `aexecute()` for async workflows
- **Planning Mode**: Enable at the workflow level with `planning=True`
- **Branching**: Use `next_steps` and `branch_condition` for conditional routing
- **Loops**: Use `loop_over` and `loop_var` to iterate over data
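The `{{variable}}` context passing above can be illustrated with a small substitution helper (a sketch of the idea, not the library's actual renderer):

```python
import re

# Sketch of {{variable}} substitution for workflow step actions,
# including the previous_output context variable described above.
# Unknown placeholders are left untouched.
def render(action: str, variables: dict) -> str:
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        action,
    )

step_vars = {"topic": "AI trends", "previous_output": "3 key findings"}
print(render("Research {{topic}}", step_vars))                        # Research AI trends
print(render("Write report based on {{previous_output}}", step_vars)) # Write report based on 3 key findings
```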
| Use Case | Recommended |
|---|---|
| Simple function pipelines | Workflow class |
| Agent-only pipelines | Workflow class |
| CSV batch processing | Workflow + loop() |
| Complex task routing | Workflow + route() |
| Markdown templates | WorkflowManager |
| Early stop / conditional | Workflow class |
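The `route()` pattern referenced in the table can be sketched as a plain function: the previous step's output selects which handler chain runs next. The handlers here are toy stand-ins, not the library implementation:

```python
# Sketch of decision-based routing: a routing step holds a mapping
# from classification result to handler chains, with a default branch.
def route(branches: dict):
    def step(previous_result: str):
        handlers = branches.get(previous_result, branches["default"])
        return [h(previous_result) for h in handlers]
    return step

approve = lambda r: f"approved:{r}"
reject = lambda r: f"rejected:{r}"
fallback = lambda r: f"fallback:{r}"

router = route({"approve": [approve], "reject": [reject], "default": [fallback]})
print(router("approve"))  # ['approved:approve']
print(router("unknown"))  # ['fallback:unknown']
```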
The easiest way to create workflows - just pass functions as steps:
```python
from praisonaiagents import Workflow, WorkflowContext, StepResult

# Define simple handler functions
def validate(ctx: WorkflowContext) -> StepResult:
    if not ctx.input:
        return StepResult(output="No input", stop_workflow=True)
    return StepResult(output=f"Valid: {ctx.input}")

def process(ctx: WorkflowContext) -> StepResult:
    return StepResult(output=f"Processed: {ctx.previous_result}")

# Create and run workflow
workflow = Workflow(steps=[validate, process])
result = workflow.start("Hello World", verbose=True)
print(result["output"])  # "Processed: Valid: Hello World"
```

Key Features:
- **Just pass functions**: No complex configuration needed
- **Early stop**: Return `stop_workflow=True` to stop the workflow
- **Context passing**: Access `ctx.input`, `ctx.previous_result`, `ctx.variables`
- **Verbose mode**: See step-by-step progress
```python
from praisonaiagents import WorkflowStep

# Branching step
decision_step = WorkflowStep(
    name="decide",
    action="Evaluate if task is complete",
    next_steps=["success_step", "retry_step"],
    branch_condition={"success": ["success_step"], "failure": ["retry_step"]}
)

# Loop step
loop_step = WorkflowStep(
    name="process_items",
    action="Process {{item}}",
    loop_over="items",  # Variable containing list
    loop_var="item"     # Current item variable name
)
```

```python
from praisonaiagents import Workflow, WorkflowContext, WorkflowStep, StepResult
# Or use Pipeline (alias for Workflow)
from praisonaiagents import Pipeline
from praisonaiagents.workflows import route, parallel, loop, repeat

# 1. ROUTING - Decision-based branching
workflow = Workflow(steps=[
    classify_request,  # Returns "approve" or "reject"
    route({
        "approve": [approve_handler],
        "reject": [reject_handler],
        "default": [fallback_handler]
    })
])

# 2. PARALLEL - Concurrent execution
workflow = Workflow(steps=[
    parallel([research_market, research_competitors, research_customers]),
    summarize_results  # Gets all parallel outputs
])

# 3. LOOP - Iterate over a list or CSV
workflow = Workflow(
    steps=[loop(process_item, over="items")],
    variables={"items": ["a", "b", "c"]}
)
# Or from a CSV file:
workflow = Workflow(steps=[loop(process_row, from_csv="data.csv")])

# 4. REPEAT - Evaluator-Optimizer pattern
workflow = Workflow(steps=[
    repeat(
        generator,
        until=lambda ctx: "done" in ctx.previous_result,
        max_iterations=5
    )
])

# 5. CALLBACKS - Monitor workflow execution
workflow = Workflow(
    steps=[step1, step2],
    on_workflow_start=lambda w, i: print(f"Starting: {i}"),
    on_step_complete=lambda name, r: print(f"{name}: {r.output[:50]}"),
    on_workflow_complete=lambda w, r: print(f"Done: {r['status']}")
)

# 6. GUARDRAILS - Validate and retry
def validate(result):
    return ("error" not in result.output, "Fix the error")

workflow = Workflow(steps=[
    WorkflowStep(name="gen", handler=generator, guardrail=validate, max_retries=3)
])
```

Use Agent objects directly as workflow steps:
```python
from praisonaiagents import Agent, Workflow, WorkflowStep
from praisonaiagents.workflows import route, parallel

# 1. SEQUENTIAL AGENTS
researcher = Agent(name="Researcher", role="Research expert", tools=[tavily_search])
writer = Agent(name="Writer", role="Content writer")
editor = Agent(name="Editor", role="Editor")

workflow = Workflow(steps=[researcher, writer, editor])
result = workflow.start("Research and write about AI")

# 2. PARALLEL AGENTS
workflow = Workflow(steps=[
    parallel([researcher1, researcher2, researcher3]),
    aggregator_agent
])

# 3. ROUTE TO AGENTS
workflow = Workflow(steps=[
    classifier_function,
    route({
        "technical": [tech_agent],
        "creative": [creative_agent],
        "default": [general_agent]
    })
])

# 4. WITH PLANNING & REASONING
workflow = Workflow(
    steps=[researcher, writer, editor],
    planning=True,          # Create execution plan
    planning_llm="gpt-4o",  # LLM for planning
    reasoning=True,         # Chain-of-thought reasoning
    verbose=True
)

# 5. TOOLS PER STEP
workflow = Workflow(steps=[
    WorkflowStep(
        name="research",
        action="Research {{topic}}",
        tools=[tavily_search, web_scraper],
        agent_config={"name": "Researcher", "role": "Expert"}
    )
])

# 6. OUTPUT TO FILE / IMAGES / PYDANTIC
from pydantic import BaseModel

class Report(BaseModel):
    title: str
    content: str

workflow = Workflow(steps=[
    WorkflowStep(name="analyze", action="Analyze image", images=["image.jpg"]),
    WorkflowStep(name="report", action="Generate report", output_pydantic=Report),
    WorkflowStep(name="save", action="Save results", output_file="output/report.txt")
])

# 7. ASYNC EXECUTION
import asyncio

async def main():
    result = await workflow.astart("input")
    print(result)

asyncio.run(main())

# 8. STATUS TRACKING
workflow.status         # "not_started" | "running" | "completed"
workflow.step_statuses  # {"step1": "completed", "step2": "skipped"}

# 9. MEMORY CONFIG
workflow = Workflow(
    steps=[researcher, writer],
    memory_config={"provider": "chroma", "persist": True, "collection": "my_workflow"}
)
result1 = workflow.start("Research AI")
result2 = workflow.start("Continue the research")  # Remembers first run
```

```yaml
# .praison/workflows/research.yaml
name: Research Workflow
description: Research and write content with all patterns

agents:
  researcher:
    role: Research Expert
    goal: Find accurate information
    tools: [tavily_search, web_scraper]
  writer:
    role: Content Writer
    goal: Write engaging content
  editor:
    role: Editor
    goal: Polish content

steps:
  # Sequential
  - agent: researcher
    action: Research {{topic}}
    output_variable: research_data

  # Routing
  - name: classifier
    action: Classify content type
    route:
      technical: [tech_handler]
      creative: [creative_handler]
      default: [general_handler]

  # Parallel
  - name: parallel_research
    parallel:
      - agent: researcher
        action: Research market
      - agent: researcher
        action: Research competitors

  # Loop
  - agent: writer
    action: Write about {{item}}
    loop_over: topics
    loop_var: item

  # Repeat (evaluator-optimizer)
  - agent: editor
    action: Review and improve
    repeat:
      until: "quality > 8"
      max_iterations: 3

  # Output to file
  - agent: writer
    action: Write final report
    output_file: output/{{topic}}_report.md

variables:
  topic: AI trends
  topics: [ML, NLP, Vision]

planning: true
planning_llm: gpt-4o

memory_config:
  provider: chroma
  persist: true
```

Configure hooks in `.praison/hooks.json`:
```python
from praisonaiagents.memory import HooksManager

hooks = HooksManager()

# Register Python hooks
hooks.register("pre_write_code", lambda ctx: print(f"Writing {ctx['file']}"))

# Execute hooks
result = hooks.execute("pre_write_code", {"file": "main.py"})
```

```bash
pip install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
praisonai --auto create a movie script about Robots on Mars
```

```bash
# Rewrite query for better results (uses QueryRewriterAgent)
praisonai "AI trends" --query-rewrite

# Rewrite with search tools (agent decides when to search)
praisonai "latest developments" --query-rewrite --rewrite-tools "internet_search"

# Works with any prompt
praisonai "explain quantum computing" --query-rewrite -v
```

```bash
# Default: OpenAI (o4-mini-deep-research)
praisonai research "What are the latest AI trends in 2025?"

# Use Gemini
praisonai research --model deep-research-pro "Your research query"

# Rewrite query before research
praisonai research --query-rewrite "AI trends"

# Rewrite with search tools
praisonai research --query-rewrite --rewrite-tools "internet_search" "AI trends"

# Use custom tools from file (gathers context before deep research)
praisonai research --tools tools.py "Your research query"
praisonai research -t my_tools.py "Your research query"

# Use built-in tools by name (comma-separated)
praisonai research --tools "internet_search,wiki_search" "Your query"
praisonai research -t "yfinance,calculator_tools" "Stock analysis query"

# Save output to file (output/research/{query}.md)
praisonai research --save "Your research query"
praisonai research -s "Your research query"

# Combine options
praisonai research --query-rewrite --tools tools.py --save "Your research query"

# Verbose mode (show debug logs)
praisonai research -v "Your research query"
```

```bash
# Enable planning mode - agent creates a plan before execution
praisonai "Research AI trends and write a summary" --planning

# Planning with tools for research
praisonai "Analyze market trends" --planning --planning-tools tools.py

# Planning with chain-of-thought reasoning
praisonai "Complex analysis task" --planning --planning-reasoning

# Auto-approve plans without confirmation
praisonai "Task" --planning --auto-approve-plan
```

```bash
# Enable memory for agent (persists across sessions)
praisonai "My name is John" --memory

# Memory with user isolation
praisonai "Remember my preferences" --memory --user-id user123

# Memory management commands
praisonai memory show                       # Show memory statistics
praisonai memory add "User prefers Python"  # Add to long-term memory
praisonai memory search "Python"            # Search memories
praisonai memory clear                      # Clear short-term memory
praisonai memory clear all                  # Clear all memory
praisonai memory save my_session            # Save session
praisonai memory resume my_session          # Resume session
praisonai memory sessions                   # List saved sessions
praisonai memory checkpoint                 # Create checkpoint
praisonai memory restore <checkpoint_id>    # Restore checkpoint
praisonai memory checkpoints                # List checkpoints
praisonai memory help                       # Show all commands
```

```bash
# List all loaded rules (from PRAISON.md, CLAUDE.md, etc.)
praisonai rules list

# Show specific rule details
praisonai rules show <rule_name>

# Create a new rule
praisonai rules create my_rule "Always use type hints"

# Delete a rule
praisonai rules delete my_rule

# Show rules statistics
praisonai rules stats

# Include manual rules with prompts
praisonai "Task" --include-rules security,testing
```

```bash
# List available workflows
praisonai workflow list

# Execute a workflow with tools and save output
praisonai workflow run "Research Blog" --tools tavily --save

# Execute with variables
praisonai workflow run deploy --workflow-var environment=staging --workflow-var branch=main

# Execute with planning mode (AI creates sub-steps for each workflow step)
praisonai workflow run "Research Blog" --planning --verbose

# Execute with reasoning mode (chain-of-thought)
praisonai workflow run "Analysis" --reasoning --verbose

# Execute with memory enabled
praisonai workflow run "Research" --memory

# Show workflow details
praisonai workflow show deploy

# Create a new workflow template
praisonai workflow create my_workflow

# Inline workflow (no template file needed)
praisonai "What is AI?" --workflow "Research,Summarize" --save

# Inline workflow with step actions
praisonai "GPT-5" --workflow "Research:Search for info,Write:Write blog" --tools tavily

# Workflow CLI help
praisonai workflow help
```

Workflow CLI Options:

| Flag | Description |
|---|---|
| `--workflow-var key=value` | Set workflow variable (can be repeated) |
| `--llm <model>` | LLM model (e.g., `openai/gpt-4o-mini`) |
| `--tools <tools>` | Tools (comma-separated, e.g., `tavily`) |
| `--planning` | Enable planning mode |
| `--reasoning` | Enable reasoning mode |
| `--memory` | Enable memory |
| `--verbose` | Enable verbose output |
| `--save` | Save output to file |
```bash
# List configured hooks
praisonai hooks list

# Show hooks statistics
praisonai hooks stats

# Create hooks.json template
praisonai hooks init
```

```bash
# Enable Claude Memory Tool (Anthropic models only)
praisonai "Research and remember findings" --claude-memory --llm anthropic/claude-sonnet-4-20250514
```

```bash
# Validate output with LLM guardrail
praisonai "Write code" --guardrail "Ensure code is secure and follows best practices"

# Combine with other flags
praisonai "Generate SQL query" --guardrail "No DROP or DELETE statements" --save
```

```bash
# Display token usage and cost metrics
praisonai "Analyze this data" --metrics

# Combine with other features
praisonai "Complex task" --metrics --planning
```

```bash
# Process images with vision-based tasks
praisonai "Describe this image" --image path/to/image.png

# Analyze image content
praisonai "What objects are in this photo?" --image photo.jpg --llm openai/gpt-4o
```

```bash
# Enable usage monitoring and analytics
praisonai "Task" --telemetry

# Combine with metrics for full observability
praisonai "Complex analysis" --telemetry --metrics
```

```bash
# Use MCP server tools
praisonai "Search files" --mcp "npx -y @modelcontextprotocol/server-filesystem ."

# MCP with environment variables
praisonai "Search web" --mcp "npx -y @modelcontextprotocol/server-brave-search" --mcp-env "BRAVE_API_KEY=your_key"

# Multiple MCP options
praisonai "Task" --mcp "npx server" --mcp-env "KEY1=value1,KEY2=value2"
```

```bash
# Search codebase for relevant context
praisonai "Find authentication code" --fast-context ./src

# Add code context to any task
praisonai "Explain this function" --fast-context /path/to/project
```

```bash
# Add documents to knowledge base
praisonai knowledge add document.pdf
praisonai knowledge add ./docs/

# Search knowledge base
praisonai knowledge search "API authentication"

# List indexed documents
praisonai knowledge list

# Clear knowledge base
praisonai knowledge clear

# Show knowledge base info
praisonai knowledge info

# Show all commands
praisonai knowledge help
```

```bash
# List all saved sessions
praisonai session list

# Show session details
praisonai session show my-project

# Resume a session (load into memory)
praisonai session resume my-project

# Delete a session
praisonai session delete my-project

# Auto-save session after each run
praisonai "Analyze this code" --auto-save my-project

# Load history from last N sessions into context
praisonai "Continue our discussion" --history 5
```

```python
from praisonaiagents import Agent

# Auto-save session after each run
agent = Agent(
    name="Assistant",
    memory=True,
    auto_save="my-project"
)

# Load history from last 5 sessions
agent = Agent(
    name="Assistant",
    memory=True,
    history_in_context=5
)
```

```python
from praisonaiagents.memory.workflows import WorkflowManager

manager = WorkflowManager()

# Save checkpoint after each step
result = manager.execute("deploy", checkpoint="deploy-v1")

# Resume from checkpoint
result = manager.execute("deploy", resume="deploy-v1")

# List/delete checkpoints
manager.list_checkpoints()
manager.delete_checkpoint("deploy-v1")
```

```bash
# List all available tools
praisonai tools list

# Get info about a specific tool
praisonai tools info internet_search

# Search for tools
praisonai tools search "web"

# Show all commands
praisonai tools help
```

```bash
# Enable agent-to-agent task delegation
praisonai "Research and write article" --handoff "researcher,writer,editor"

# Complex multi-agent workflow
praisonai "Analyze data and create report" --handoff "analyst,visualizer,writer"
```

```bash
# Enable automatic memory extraction
praisonai "Learn about user preferences" --auto-memory

# Combine with user isolation
praisonai "Remember my settings" --auto-memory --user-id user123
```

```bash
# Generate todo list from task
praisonai "Plan the project" --todo

# Add a todo item
praisonai todo add "Implement feature X"

# List all todos
praisonai todo list

# Complete a todo
praisonai todo complete 1

# Delete a todo
praisonai todo delete 1

# Clear all todos
praisonai todo clear

# Show all commands
praisonai todo help
```

```bash
# Auto-select best model based on task complexity
praisonai "Simple question" --router

# Specify preferred provider
praisonai "Complex analysis" --router --router-provider anthropic

# Router automatically selects:
# - Simple tasks → gpt-4o-mini, claude-3-haiku
# - Complex tasks → gpt-4-turbo, claude-3-opus
```

```bash
# Enable visual workflow tracking
praisonai agents.yaml --flow-display

# Combine with other features
praisonai "Multi-step task" --planning --flow-display
```

```bash
# List all project docs
praisonai docs list

# Create a new doc
praisonai docs create project-overview "This project is a Python web app..."

# Show a specific doc
praisonai docs show project-overview

# Delete a doc
praisonai docs delete old-doc

# Show all commands
praisonai docs help
```

```bash
# List all MCP configurations
praisonai mcp list

# Create a new MCP config
praisonai mcp create filesystem npx -y @modelcontextprotocol/server-filesystem .

# Show a specific config
praisonai mcp show filesystem

# Enable/disable a config
praisonai mcp enable filesystem
praisonai mcp disable filesystem

# Delete a config
praisonai mcp delete filesystem

# Show all commands
praisonai mcp help
```

```bash
# Generate AI commit message for staged changes
praisonai commit

# Generate, commit, and push
praisonai commit --push
```

```bash
# Include file content in prompt
praisonai "@file:src/main.py explain this code"

# Include project doc
praisonai "@doc:project-overview help me add a feature"

# Search the web
praisonai "@web:python best practices give me tips"

# Fetch URL content
praisonai "@url:https://docs.python.org summarize this"

# Combine multiple mentions
praisonai "@file:main.py @doc:coding-standards review this code"
```

Expand short prompts into detailed, actionable prompts:

```bash
# Expand a short prompt into a detailed prompt
praisonai "write a movie script in 3 lines" --expand-prompt

# With verbose output
praisonai "blog about AI" --expand-prompt -v

# With tools for context gathering
praisonai "latest AI trends" --expand-prompt --expand-tools tools.py

# Combine with query rewrite
praisonai "AI news" --query-rewrite --expand-prompt
```

```python
from praisonaiagents import PromptExpanderAgent, ExpandStrategy

# Basic usage
agent = PromptExpanderAgent()
result = agent.expand("write a movie script in 3 lines")
print(result.expanded_prompt)

# With specific strategy
result = agent.expand("blog about AI", strategy=ExpandStrategy.DETAILED)

# Available strategies: BASIC, DETAILED, STRUCTURED, CREATIVE, AUTO
```

Key Difference:
- `--query-rewrite`: Optimizes queries for search/retrieval (RAG)
- `--expand-prompt`: Expands prompts for detailed task execution
```bash
# Web Search - Get real-time information
praisonai "What are the latest AI news today?" --web-search --llm openai/gpt-4o-search-preview

# Web Fetch - Retrieve and analyze URL content (Anthropic only)
praisonai "Summarize https://docs.praison.ai" --web-fetch --llm anthropic/claude-sonnet-4-20250514

# Prompt Caching - Reduce costs for repeated prompts
praisonai "Analyze this document..." --prompt-caching --llm anthropic/claude-sonnet-4-20250514
```

```python
from praisonaiagents import Agent

# Web Search
agent = Agent(
    instructions="You are a research assistant",
    llm="openai/gpt-4o-search-preview",
    web_search=True
)

# Web Fetch (Anthropic only)
agent = Agent(
    instructions="You are a content analyzer",
    llm="anthropic/claude-sonnet-4-20250514",
    web_fetch=True
)

# Prompt Caching
agent = Agent(
    instructions="You are an AI assistant..." * 50,  # Long system prompt
    llm="anthropic/claude-sonnet-4-20250514",
    prompt_caching=True
)
```

Supported Providers:
| Feature | Providers |
|---|---|
| Web Search | OpenAI, Gemini, Anthropic, xAI, Perplexity |
| Web Fetch | Anthropic |
| Prompt Caching | OpenAI (auto), Anthropic, Bedrock, Deepseek |
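Conceptually, prompt caching saves cost because an identical long prefix (such as a large system prompt) is not re-processed on every call. The real caching happens server-side at the provider; this client-side sketch only models the saving, with invented names throughout:

```python
import hashlib

# Toy model of prompt caching: the long system prompt is hashed, and
# repeated calls with the same prefix reuse the cached processing
# instead of paying for it again.
class CachingClient:
    def __init__(self):
        self.cache = {}
        self.calls = 0  # number of full-cost (cache-miss) calls

    def complete(self, system_prompt: str, user_msg: str) -> str:
        key = hashlib.sha256(system_prompt.encode()).hexdigest()
        if key not in self.cache:
            self.calls += 1  # cache miss: pay full prefix cost
            self.cache[key] = f"processed:{len(system_prompt)}"
        return f"{self.cache[key]}|answer:{user_msg}"

client = CachingClient()
long_prompt = "You are an AI assistant..." * 50
client.complete(long_prompt, "q1")
client.complete(long_prompt, "q2")  # prefix served from cache
print(client.calls)  # 1
```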
PraisonAI supports MCP Protocol Revision 2025-11-25 with multiple transports.
```python
from praisonaiagents import Agent, MCP

# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))

# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))

# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))

# SSE (Legacy) - Backward compatibility
agent = Agent(tools=MCP("http://localhost:8080/sse"))

# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)
```

Expose your Python functions as MCP tools for Claude Desktop, Cursor, and other MCP clients:

```python
from praisonaiagents.mcp import ToolsMCPServer

def search_web(query: str, max_results: int = 5) -> dict:
    """Search the web for information."""
    return {"results": [f"Result for {query}"]}

def calculate(expression: str) -> dict:
    """Evaluate a mathematical expression."""
    return {"result": eval(expression)}  # demo only; avoid eval on untrusted input

# Create and run MCP server
server = ToolsMCPServer(name="my-tools")
server.register_tools([search_web, calculate])
server.run()  # stdio for Claude Desktop
# server.run_sse(host="0.0.0.0", port=8080)  # SSE for web clients
```

| Feature | Description |
|---|---|
| Session Management | Automatic `Mcp-Session-Id` handling |
| Protocol Versioning | `Mcp-Protocol-Version` header |
| Resumability | SSE stream recovery via `Last-Event-ID` |
| Security | Origin validation, DNS rebinding prevention |
| WebSocket | Auto-reconnect with exponential backoff |
| Feature | Docs |
|---|---|
| π Query Rewrite - RAG optimization | π |
| π¬ Deep Research - Automated research | π |
| π Planning - Step-by-step execution | π |
| πΎ Memory - Persistent agent memory | π |
| π Rules - Auto-discovered instructions | π |
| π Workflow - Multi-step workflows | π |
| πͺ Hooks - Event-driven actions | π |
| π§ Claude Memory - Anthropic memory tool | π |
| π‘οΈ Guardrail - Output validation | π |
| π Metrics - Token usage tracking | π |
| πΌοΈ Image - Vision processing | π |
| π‘ Telemetry - Usage monitoring | π |
| π MCP - Model Context Protocol | π |
| β‘ Fast Context - Codebase search | π |
| π Knowledge - RAG management | π |
| π¬ Session - Conversation management | π |
| π§ Tools - Tool discovery | π |
| π€ Handoff - Agent delegation | π |
| π§ Auto Memory - Memory extraction | π |
| π Todo - Task management | π |
| π― Router - Smart model selection | π |
| π Flow Display - Visual workflow | π |
| β¨ Prompt Expansion - Detailed prompts | π |
| π Web Search - Real-time search | π |
| π₯ Web Fetch - URL content retrieval | π |
| πΎ Prompt Caching - Cost reduction | π |
npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx

const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot in Mars');

graph LR
%% Define the main flow
Start([βΆ Start]) --> Agent1
Agent1 --> Process[β Process]
Process --> Agent2
Agent2 --> Output([β Output])
Process -.-> Agent1
%% Define subgraphs for agents and their tasks
subgraph Agent1[ ]
Task1[π Task]
AgentIcon1[π€ AI Agent]
Tools1[π§ Tools]
Task1 --- AgentIcon1
AgentIcon1 --- Tools1
end
subgraph Agent2[ ]
Task2[π Task]
AgentIcon2[π€ AI Agent]
Tools2[π§ Tools]
Task2 --- AgentIcon2
AgentIcon2 --- Tools2
end
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef tools fill:#2E8B57,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Start,Output,Task1,Task2 input
class Process,AgentIcon1,AgentIcon2 process
class Tools1,Tools2 tools
class Agent1,Agent2 transparent
Create AI agents that can use tools to interact with external systems and perform actions.
flowchart TB
subgraph Tools
direction TB
T3[Internet Search]
T1[Code Execution]
T2[Formatting]
end
Input[Input] ---> Agents
subgraph Agents
direction LR
A1[Agent 1]
A2[Agent 2]
A3[Agent 3]
end
Agents ---> Output[Output]
T3 --> A1
T1 --> A2
T2 --> A3
style Tools fill:#189AB4,color:#fff
style Agents fill:#8B0000,color:#fff
style Input fill:#8B0000,color:#fff
style Output fill:#8B0000,color:#fff
Create AI agents with memory capabilities for maintaining context and information across tasks.
flowchart TB
subgraph Memory
direction TB
STM[Short Term]
LTM[Long Term]
end
subgraph Store
direction TB
DB[(Vector DB)]
end
Input[Input] ---> Agents
subgraph Agents
direction LR
A1[Agent 1]
A2[Agent 2]
A3[Agent 3]
end
Agents ---> Output[Output]
Memory <--> Store
Store <--> A1
Store <--> A2
Store <--> A3
style Memory fill:#189AB4,color:#fff
style Store fill:#2E8B57,color:#fff
style Agents fill:#8B0000,color:#fff
style Input fill:#8B0000,color:#fff
style Output fill:#8B0000,color:#fff
The simplest form of task execution where tasks are performed one after another.
graph LR
Input[Input] --> A1
subgraph Agents
direction LR
A1[Agent 1] --> A2[Agent 2] --> A3[Agent 3]
end
A3 --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class A1,A2,A3 process
class Agents transparent
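The sequential flow above can be sketched in plain Python. The `agent_*` functions are hypothetical stand-ins for LLM-backed agents, not the PraisonAI API:

```python
from functools import reduce

# Hypothetical stand-ins for LLM-backed agents
def agent_1(text: str) -> str:
    return f"researched({text})"

def agent_2(text: str) -> str:
    return f"drafted({text})"

def agent_3(text: str) -> str:
    return f"edited({text})"

def run_sequential(task: str, agents) -> str:
    """Pipe each agent's output into the next, in order."""
    return reduce(lambda out, agent: agent(out), agents, task)

print(run_sequential("topic", [agent_1, agent_2, agent_3]))
# edited(drafted(researched(topic)))
```

Each agent only ever sees the previous agent's output, which is what makes this the simplest process type.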
Uses a manager agent to coordinate task execution and agent assignments.
graph TB
Input[Input] --> Manager
subgraph Agents
Manager[Manager Agent]
subgraph Workers
direction LR
W1[Worker 1]
W2[Worker 2]
W3[Worker 3]
end
Manager --> W1
Manager --> W2
Manager --> W3
end
W1 --> Manager
W2 --> Manager
W3 --> Manager
Manager --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class Manager,W1,W2,W3 process
class Agents,Workers transparent
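The manager/worker coordination can be sketched with plain functions standing in for agents (all names here are hypothetical, not the PraisonAI API):

```python
def make_worker(name: str):
    def run(subtask: str) -> str:
        return f"{name} finished {subtask}"  # worker reports back to the manager
    return run

def manager(task: str, workers) -> str:
    # The manager splits the task and assigns one subtask per worker...
    subtasks = [f"{task}-part{i}" for i in range(1, len(workers) + 1)]
    reports = [run(sub) for run, sub in zip(workers, subtasks)]
    # ...then consolidates the worker reports into a single output
    return "; ".join(reports)

print(manager("report", [make_worker("W1"), make_worker("W2")]))
# W1 finished report-part1; W2 finished report-part2
```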
Advanced process type supporting complex task relationships and conditional execution.
graph LR
Input[Input] --> Start
subgraph Workflow
direction LR
Start[Start] --> C1{Condition}
C1 --> |Yes| A1[Agent 1]
C1 --> |No| A2[Agent 2]
A1 --> Join
A2 --> Join
Join --> A3[Agent 3]
end
A3 --> Output[Output]
classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
classDef decision fill:#2E8B57,stroke:#7C90A0,color:#fff
classDef transparent fill:none,stroke:none
class Input,Output input
class Start,A1,A2,A3,Join process
class C1 decision
class Workflow transparent
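The condition/branch/join structure is ordinary control flow; a minimal sketch with stub functions (not the PraisonAI API):

```python
def agent_1(x: int) -> str:
    return f"agent1({x})"

def agent_2(x: int) -> str:
    return f"agent2({x})"

def agent_3(text: str) -> str:
    return f"agent3({text})"

def workflow(x: int) -> str:
    # The condition routes to exactly one branch...
    branch_out = agent_1(x) if x > 0 else agent_2(x)
    # ...the branches join, and a final agent runs on the joined result
    return agent_3(branch_out)

print(workflow(5))   # agent3(agent1(5))
print(workflow(-2))  # agent3(agent2(-2))
```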
Create AI agents that can dynamically route tasks to specialized LLM instances.
flowchart LR
In[In] --> Router[LLM Call Router]
Router --> LLM1[LLM Call 1]
Router --> LLM2[LLM Call 2]
Router --> LLM3[LLM Call 3]
LLM1 --> Out[Out]
LLM2 --> Out
LLM3 --> Out
style In fill:#8B0000,color:#fff
style Router fill:#2E8B57,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
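The routing pattern reduces to: inspect the input, dispatch to exactly one specialised call. A plain-Python sketch (the classifier and model names are hypothetical):

```python
# Hypothetical specialised LLM calls, keyed by task type
MODEL_CALLS = {
    "code": lambda q: f"code-model answered: {q}",
    "general": lambda q: f"general-model answered: {q}",
}

def route(query: str) -> str:
    # A real router would use an LLM classifier; this stub keys off a keyword
    kind = "code" if "function" in query else "general"
    return MODEL_CALLS[kind](query)

print(route("write a function"))  # code-model answered: write a function
print(route("capital of France"))  # general-model answered: capital of France
```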
Create AI agents that orchestrate and distribute tasks among specialized workers.
flowchart LR
In[In] --> Router[LLM Call Router]
Router --> LLM1[LLM Call 1]
Router --> LLM2[LLM Call 2]
Router --> LLM3[LLM Call 3]
LLM1 --> Synthesizer[Synthesizer]
LLM2 --> Synthesizer
LLM3 --> Synthesizer
Synthesizer --> Out[Out]
style In fill:#8B0000,color:#fff
style Router fill:#2E8B57,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Synthesizer fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
Create AI agents that can autonomously monitor, act, and adapt based on environment feedback.
flowchart LR
Human[Human] <--> LLM[LLM Call]
LLM -->|ACTION| Environment[Environment]
Environment -->|FEEDBACK| LLM
LLM --> Stop[Stop]
style Human fill:#8B0000,color:#fff
style LLM fill:#2E8B57,color:#fff
style Environment fill:#8B0000,color:#fff
style Stop fill:#333,color:#fff
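The act/feedback loop above can be sketched as a plain Python loop: the agent acts, reads the environment's feedback, adapts, and stops once the goal state (or a step budget) is reached. All names here are illustrative stubs:

```python
def environment(action: int) -> int:
    """Hypothetical environment: feedback is the remaining distance to the goal."""
    return 10 - action

def autonomous_agent(max_steps: int = 20) -> int:
    action, steps = 0, 0
    while steps < max_steps:
        feedback = environment(action)
        if feedback == 0:  # goal reached -> stop
            break
        action += 1 if feedback > 0 else -1  # adapt based on feedback
        steps += 1
    return action

print(autonomous_agent())  # 10
```

The step budget matters: without it, a misbehaving environment would loop forever.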
Create AI agents that can execute tasks in parallel for improved performance.
flowchart LR
In[In] --> LLM2[LLM Call 2]
In --> LLM1[LLM Call 1]
In --> LLM3[LLM Call 3]
LLM1 --> Aggregator[Aggregator]
LLM2 --> Aggregator
LLM3 --> Aggregator
Aggregator --> Out[Out]
style In fill:#8B0000,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Aggregator fill:#fff,color:#000
style Out fill:#8B0000,color:#fff
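The fan-out/aggregate shape maps directly onto `asyncio.gather`; a sketch with stub coroutines standing in for concurrent LLM calls:

```python
import asyncio

# Hypothetical async stand-in for an LLM call
async def llm_call(name: str, task: str) -> str:
    await asyncio.sleep(0)  # placeholder for network latency
    return f"{name}:{task}"

async def fan_out(task: str) -> str:
    # All three calls run concurrently; gather preserves argument order
    results = await asyncio.gather(
        llm_call("llm1", task), llm_call("llm2", task), llm_call("llm3", task)
    )
    return " | ".join(results)  # the aggregator combines the branches

print(asyncio.run(fan_out("summarize")))
# llm1:summarize | llm2:summarize | llm3:summarize
```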
Create AI agents with sequential prompt chaining for complex workflows.
flowchart LR
In[In] --> LLM1[LLM Call 1] --> Gate{Gate}
Gate -->|Pass| LLM2[LLM Call 2] -->|Output 2| LLM3[LLM Call 3] --> Out[Out]
Gate -->|Fail| Exit[Exit]
style In fill:#8B0000,color:#fff
style LLM1 fill:#2E8B57,color:#fff
style LLM2 fill:#2E8B57,color:#fff
style LLM3 fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
style Exit fill:#8B0000,color:#fff
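Prompt chaining with a gate, sketched in plain Python (the gate check here is a trivial placeholder for real output validation):

```python
def gate(output: str) -> bool:
    """Hypothetical validation gate between chain steps."""
    return len(output) > 0

def chain(text: str) -> str:
    step1 = f"outline({text})"  # LLM call 1
    if not gate(step1):         # gate fails -> exit early
        return "exit"
    step2 = f"draft({step1})"   # LLM call 2
    return f"polish({step2})"   # LLM call 3

print(chain("blog post"))  # polish(draft(outline(blog post)))
```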
Create AI agents that can generate and optimize solutions through iterative feedback.
flowchart LR
In[In] --> Generator[LLM Call Generator]
Generator -->|SOLUTION| Evaluator[LLM Call Evaluator] -->|ACCEPTED| Out[Out]
Evaluator -->|REJECTED + FEEDBACK| Generator
style In fill:#8B0000,color:#fff
style Generator fill:#2E8B57,color:#fff
style Evaluator fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
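The generate/evaluate loop above, sketched with stubs: the evaluator either accepts the solution or returns feedback that steers the next generation. A round cap prevents an unbounded loop:

```python
def generator(feedback: str) -> str:
    """Hypothetical generator: improves its draft when given feedback."""
    return "draft" + ("!" if feedback else "")

def evaluator(solution: str) -> tuple[bool, str]:
    """Hypothetical evaluator: accept, or reject with feedback."""
    ok = solution.endswith("!")
    return ok, "" if ok else "add emphasis"

def optimise(max_rounds: int = 5) -> str:
    solution, feedback = "", ""
    for _ in range(max_rounds):
        solution = generator(feedback)
        accepted, feedback = evaluator(solution)
        if accepted:
            break
    return solution

print(optimise())  # draft!  (accepted on the second round)
```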
Create AI agents that can efficiently handle repetitive tasks through automated loops.
flowchart LR
In[Input] --> LoopAgent[("Looping Agent")]
LoopAgent --> Task[Task]
Task --> |Next iteration| LoopAgent
Task --> |Done| Out[Output]
style In fill:#8B0000,color:#fff
style LoopAgent fill:#2E8B57,color:#fff,shape:circle
style Task fill:#2E8B57,color:#fff
style Out fill:#8B0000,color:#fff
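Looping over a list reduces to running the same task once per item and collecting the results; a minimal sketch (the row data is illustrative):

```python
def loop_over(items, task):
    """Run the same task once per item, collecting each result."""
    return [task(item) for item in items]

rows = ["row1", "row2", "row3"]  # e.g. rows read from a CSV
print(loop_over(rows, lambda r: f"processed({r})"))
# ['processed(row1)', 'processed(row2)', 'processed(row3)']
```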
export OPENAI_BASE_URL=http://localhost:11434/v1

Replace xxxx with your Groq API key:

export OPENAI_API_KEY=xxxxxxxxxxx
export OPENAI_BASE_URL=https://api.groq.com/openai/v1

Create an agents.yaml file and add the code below:
framework: praisonai
topic: Artificial Intelligence
roles:
  screenwriter:
    backstory: "Skilled in crafting scripts with engaging dialogue about {topic}."
    goal: Create scripts from concepts.
    role: Screenwriter
    tasks:
      scriptwriting_task:
        description: "Develop scripts with compelling characters and dialogue about {topic}."
        expected_output: "Complete script ready for production."

To run the playbook:
praisonai agents.yaml

from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    return eval(expression)  # NOTE: eval is unsafe on untrusted input

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")

from praisonaiagents import Agent, BaseTool
class WeatherTool(BaseTool):
    name = "weather"
    description = "Get current weather for a location"

    def run(self, location: str) -> str:
        return f"Weather in {location}: 72°F, Sunny"

agent = Agent(
    instructions="You are a weather assistant",
    tools=[WeatherTool()]
)
agent.start("What's the weather in Paris?")

# pyproject.toml
[project]
name = "my-praisonai-tools"
version = "1.0.0"
dependencies = ["praisonaiagents"]

[project.entry-points."praisonaiagents.tools"]
my_tool = "my_package:MyTool"

# my_package/__init__.py
from praisonaiagents import BaseTool

class MyTool(BaseTool):
    name = "my_tool"
    description = "My custom tool"

    def run(self, param: str) -> str:
        return f"Result: {param}"

After pip install, tools are auto-discovered:

agent = Agent(tools=["my_tool"])  # Works automatically!

Expand short prompts into detailed, actionable prompts:
# Expand a short prompt into detailed prompt
praisonai "write a movie script in 3 lines" --expand-prompt
# With verbose output
praisonai "blog about AI" --expand-prompt -v
# With tools for context gathering
praisonai "latest AI trends" --expand-prompt --expand-tools tools.py
# Combine with query rewrite
praisonai "AI news" --query-rewrite --expand-prompt

from praisonaiagents import PromptExpanderAgent, ExpandStrategy
# Basic usage
agent = PromptExpanderAgent()
result = agent.expand("write a movie script in 3 lines")
print(result.expanded_prompt)
# With specific strategy
result = agent.expand("blog about AI", strategy=ExpandStrategy.DETAILED)
# Available strategies: BASIC, DETAILED, STRUCTURED, CREATIVE, AUTO

Key Difference:
- --query-rewrite: Optimizes queries for search/retrieval (RAG)
- --expand-prompt: Expands prompts for detailed task execution
# Web Search - Get real-time information
praisonai "What are the latest AI news today?" --web-search --llm openai/gpt-4o-search-preview
# Web Fetch - Retrieve and analyze URL content (Anthropic only)
praisonai "Summarize https://docs.praison.ai" --web-fetch --llm anthropic/claude-sonnet-4-20250514
# Prompt Caching - Reduce costs for repeated prompts
praisonai "Analyze this document..." --prompt-caching --llm anthropic/claude-sonnet-4-20250514

from praisonaiagents import Agent
# Web Search
agent = Agent(
    instructions="You are a research assistant",
    llm="openai/gpt-4o-search-preview",
    web_search=True
)

# Web Fetch (Anthropic only)
agent = Agent(
    instructions="You are a content analyzer",
    llm="anthropic/claude-sonnet-4-20250514",
    web_fetch=True
)

# Prompt Caching
agent = Agent(
    instructions="You are an AI assistant..." * 50,  # Long system prompt
    llm="anthropic/claude-sonnet-4-20250514",
    prompt_caching=True
)

Supported Providers:
| Feature | Providers |
|---|---|
| Web Search | OpenAI, Gemini, Anthropic, xAI, Perplexity |
| Web Fetch | Anthropic |
| Prompt Caching | OpenAI (auto), Anthropic, Bedrock, Deepseek |
PraisonAI supports MCP Protocol Revision 2025-11-25 with multiple transports.
from praisonaiagents import Agent, MCP
# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))
# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))
# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))
# SSE (Legacy) - Backward compatibility
agent = Agent(tools=MCP("http://localhost:8080/sse"))
# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)

Expose your Python functions as MCP tools for Claude Desktop, Cursor, and other MCP clients:
from praisonaiagents.mcp import ToolsMCPServer

def search_web(query: str, max_results: int = 5) -> dict:
    """Search the web for information."""
    return {"results": [f"Result for {query}"]}

def calculate(expression: str) -> dict:
    """Evaluate a mathematical expression."""
    return {"result": eval(expression)}  # NOTE: eval is unsafe on untrusted input

# Create and run MCP server
server = ToolsMCPServer(name="my-tools")
server.register_tools([search_web, calculate])
server.run()  # stdio for Claude Desktop
# server.run_sse(host="0.0.0.0", port=8080)  # SSE for web clients

| Feature | Description |
|---|---|
| Session Management | Automatic Mcp-Session-Id handling |
| Protocol Versioning | Mcp-Protocol-Version header |
| Resumability | SSE stream recovery via Last-Event-ID |
| Security | Origin validation, DNS rebinding prevention |
| WebSocket | Auto-reconnect with exponential backoff |
The steps below are for development only.
# Install uv if you haven't already
pip install uv
# Install from requirements
uv pip install -r pyproject.toml
# Install with extras
uv pip install -r pyproject.toml --extra code
uv pip install -r pyproject.toml --extra "crewai,autogen"

# From project root - bumps version and releases in one command
python src/praisonai/scripts/bump_and_release.py 2.2.99
# With praisonaiagents dependency
python src/praisonai/scripts/bump_and_release.py 2.2.99 --agents 0.0.169
# Then publish
cd src/praisonai && uv publish

- Fork on GitHub: Use the "Fork" button on the repository page.
- Clone your fork: git clone https://github.com/yourusername/praisonAI.git
- Create a branch: git checkout -b new-feature
- Make changes and commit: git commit -am "Add some feature"
- Push to your fork: git push origin new-feature
- Submit a pull request via GitHub's web interface.
- Await feedback from project maintainers.
Research & Intelligence:
- π¬ Deep Research Agents (OpenAI & Gemini)
- π Query Rewriter Agent (HyDE, Step-back, Multi-query)
- π Native Web Search (OpenAI, Gemini, Anthropic, xAI, Perplexity)
- π₯ Web Fetch (Retrieve full content from URLs - Anthropic)
- π Prompt Expander Agent (Expand short prompts into detailed instructions)
Memory & Caching:
- πΎ Prompt Caching (Reduce costs & latency - OpenAI, Anthropic, Bedrock, Deepseek)
- π§ Claude Memory Tool (Persistent cross-conversation memory - Anthropic Beta)
- πΎ File-Based Memory (Zero-dependency persistent memory for all agents)
- π Built-in Search Tools (Tavily, You.com, Exa - web search, news, content extraction)
Planning & Workflows:
- π Planning Mode (Plan before execution - Agent & Multi-Agent)
- π§ Planning Tools (Research with tools during planning)
- π§ Planning Reasoning (Chain-of-thought planning)
- βοΈ Prompt Chaining (Sequential prompt workflows with gates)
- π Evaluator Optimiser (Generate and optimize through iterative feedback)
- π· Orchestrator Workers (Distribute tasks among specialized workers)
- β‘ Parallelisation (Execute tasks in parallel for improved performance)
- π Repetitive Agents (Handle repetitive tasks through automated loops)
- π€ Autonomous Workflow (Monitor, act, adapt based on environment feedback)
Agent Types:
- πΌοΈ Image Generation Agent (Create images from text descriptions)
- π· Image to Text Agent (Extract text and descriptions from images)
- π¬ Video Agent (Analyze and process video content)
- π Data Analyst Agent (Analyze data and generate insights)
- π° Finance Agent (Financial analysis and recommendations)
- π Shopping Agent (Price comparison and shopping assistance)
- β Recommendation Agent (Personalized recommendations)
- π Wikipedia Agent (Search and extract Wikipedia information)
- π» Programming Agent (Code development and analysis)
- π Markdown Agent (Generate and format Markdown content)
- π Router Agent (Dynamic task routing with cost optimization)
MCP Protocol:
- π MCP Transports (stdio, Streamable HTTP, WebSocket, SSE - Protocol 2025-11-25)
- π WebSocket MCP (Real-time bidirectional connections with auto-reconnect)
- π MCP Security (Origin validation, DNS rebinding prevention, secure sessions)
- π MCP Resumability (SSE stream recovery via Last-Event-ID)
Safety & Control:
- π€ Agent Handoffs (Transfer context between specialized agents)
- π‘οΈ Guardrails (Input/output validation and safety checks)
- β Human Approval (Require human confirmation for critical actions)
- π¬ Sessions Management (Isolated conversation contexts)
- π Stateful Agents (Maintain state across interactions)
Developer Tools:
- β‘ Fast Context (Rapid parallel code search - 10-20x faster than traditional methods)
- π Rules & Instructions (Auto-discover CLAUDE.md, AGENTS.md, GEMINI.md)
- πͺ Hooks (Pre/post operation hooks for custom logic)
- π Telemetry (Track agent performance and usage)
- πΉ Camera Integration (Capture and analyze camera input)
- π Use CrewAI or AG2 (Formerly AutoGen) Framework
- π» Chat with ENTIRE Codebase
- π¨ Interactive UIs
- π YAML-based Configuration
- π οΈ Custom Tool Integration
- π Internet Search Capability (Tavily, You.com, Exa, DuckDuckGo, Crawl4AI)
- πΌοΈ Vision Language Model (VLM) Support
- ποΈ Real-time Voice Interaction






















