This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is a multi-framework AI agent examples repository showcasing specialized agents across Google ADK and AWS Strands frameworks. The project focuses on practical agent implementations for file operations, Jira integration, user story creation, and more.
- GoogleADK/: Google's Agent Development Kit agents
  - Each agent follows the pattern: `agent.py`, `prompts.py`, `__init__.py`, `evals/`
  - Agents use the `google.adk.agents.Agent` base class
  - Tools loaded via `basic_open_agent_tools` or MCP servers
- AWS_Strands/: Strands framework agents
  - Uses `strands.Agent` and `strands_tools`
  - Product_Pete agent demonstrates Atlassian MCP integration
- claude_code_subagents/: Claude Code custom subagent templates
  - Example templates for creating custom subagents in Claude Code
  - Demonstrates specialized subagents for QA testing, React development, and user story creation
  - These are intentional examples showing how to define and configure custom subagents
All agents follow a consistent structure:
- `agent.py`: Main agent configuration with `create_agent()` and `root_agent`
- `prompts.py`: Agent-specific system prompts and instructions
- `evals/`: Evaluation tests (JSON test cases + test runners)
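The module layout above can be sketched as follows. This is a minimal illustration, not code from the repo: the `Agent` dataclass here is a stand-in for `google.adk.agents.Agent` so the sketch runs without ADK installed, and the agent name, model name, and prompt are placeholders.

```python
from dataclasses import dataclass, field


# Stand-in for google.adk.agents.Agent, so this sketch runs without ADK.
@dataclass
class Agent:
    name: str
    model: str
    instruction: str
    tools: list = field(default_factory=list)


# In the real layout this lives in prompts.py; inlined here for the sketch.
SYSTEM_PROMPT = "You are a helpful file-operations agent."


def create_agent() -> Agent:
    """Build and return the configured agent (the pattern agent.py follows)."""
    return Agent(
        name="example_agent",          # placeholder name
        model="gemini-model-name",     # placeholder model id
        instruction=SYSTEM_PROMPT,
        tools=[],  # real agents load tools via basic_open_agent_tools or MCP
    )


# Module-level instance that tooling such as `adk web` / `adk eval` discovers.
root_agent = create_agent()
```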
```bash
# Core dependencies only (recommended for most users)
pip install -r requirements.txt

# Full development setup
pip install -r requirements.txt
pip install -r requirements-dev.txt
pip install -r requirements-optional.txt

# Using UV (faster alternative)
uv pip install -r requirements.txt
uv pip install -r requirements-dev.txt      # Optional
uv pip install -r requirements-optional.txt # Optional

# GoogleADK only
pip install -r GoogleADK/requirements.txt
```

```bash
# Linting and formatting
python3 -m ruff check GoogleADK/ AWS_Strands/ --fix
python3 -m ruff format GoogleADK/ AWS_Strands/

# Type checking
python3 -m mypy GoogleADK/

# Run tests (limited - mainly Story_Sage has actual Python tests)
uv run pytest
```

```bash
cd GoogleADK
adk web  # Launches at http://localhost:8000
```

```bash
# Single agent evaluation
PYTHONPATH=.:$PYTHONPATH adk eval \
  --config_file_path GoogleADK/{Agent_Name}/evals/test_config.json \
  --print_detailed_results \
  GoogleADK/{Agent_Name} \
  GoogleADK/{Agent_Name}/evals/{test_name}.json

# Example: Jira_Johnny
PYTHONPATH=.:$PYTHONPATH adk eval \
  --config_file_path GoogleADK/Jira_Johnny/evals/test_config.json \
  --print_detailed_results \
  GoogleADK/Jira_Johnny \
  GoogleADK/Jira_Johnny/evals/00_list_available_tools_test.json
```

```bash
# Run any Strands agent with the feature-rich chat loop
python scripts/strands_chat_loop/chat_loop.py --agent AWS_Strands/Product_Pete/agent.py
python scripts/strands_chat_loop/chat_loop.py --agent AWS_Strands/Complex_Coding_Clara/agent.py

# With custom config
python scripts/strands_chat_loop/chat_loop.py --agent <agent> --config ~/.chatrc-custom
```

Features: command history, token tracking, prompt templates, status bar, session summaries.
Docs: see scripts/strands_chat_loop/README.md.
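Running every eval case for one agent means repeating the `adk eval` invocation above once per JSON file in its `evals/` directory. A hypothetical helper (neither `build_eval_cmd` nor `eval_cmds_for_agent` exists in the repo) that generates those argv lists:

```python
from pathlib import Path


def build_eval_cmd(agent_dir: str, test_file: str) -> list[str]:
    """Build the `adk eval` argv for one eval file, mirroring the command above."""
    return [
        "adk", "eval",
        "--config_file_path", f"{agent_dir}/evals/test_config.json",
        "--print_detailed_results",
        agent_dir,
        test_file,
    ]


def eval_cmds_for_agent(agent_dir: str) -> list[list[str]]:
    """One command per *.json eval case (test_config.json excluded)."""
    evals = Path(agent_dir) / "evals"
    cases = sorted(p for p in evals.glob("*.json") if p.name != "test_config.json")
    return [build_eval_cmd(agent_dir, str(p)) for p in cases]
```

Each command would still need to run from the project root with `PYTHONPATH=.` set, e.g. via `subprocess.run(cmd, env={**os.environ, "PYTHONPATH": "."})`.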
```bash
# Run agent directly (basic interface)
python AWS_Strands/Product_Pete/agent.py
```

When the user says "cleanup":
- Run Quality Tools: Execute all quality checks and fix issues
  ```bash
  python3 -m ruff check GoogleADK/ AWS_Strands/ --fix
  python3 -m ruff format GoogleADK/ AWS_Strands/
  python3 -m mypy GoogleADK/
  uv run pytest
  ```
- Review TODO Files: Update TODO.md files to correct outdated information
- Commit Changes: Create a commit with the standard message
  ```bash
  git commit -m "Run quality checks and cleanup"
  ```
- Butler_Basil: Basic filesystem operations
- FileOps_Freddy: Advanced file operations (98.9% success)
- Jira_Johnny: Jira integration via HTTP MCP (100% success)
- Scrum_Sam: Multi-agent Scrum Master with sub-agents
- Story_Sage: User story specialist with INVEST principles
- Data_Daniel: Tool schema validation errors
- Stocks_Sarah: MCP server timeout issues
- Product_Pete: Product Manager assistant with Atlassian MCP integration
- QuickResearch_Quinten: Generic web research agent for targeted information gathering
- Environment Setup: Create a `.env` file with API keys (GOOGLE_API_KEY, ANTHROPIC_API_KEY, etc.)
- ADK Evaluations: Always run from the project root with PYTHONPATH set
- pytest Files: For CI/CD automation only - use `adk eval` for manual testing
- Local Models: Support for Ollama with Gemma models (gemma:2b, gemma:7b)
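The repo's exact `.env` handling isn't shown here; as an illustration only, a minimal stdlib loader for simple `KEY=VALUE` lines (in practice a library such as `python-dotenv`, or the frameworks' own loading, may be what the agents use):

```python
import os


def load_env(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines (no quoting or expansion) into os.environ."""
    loaded: dict[str, str] = {}
    try:
        with open(path) as fh:
            for raw in fh:
                line = raw.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, and malformed lines
                key, _, value = line.partition("=")
                loaded[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no .env present: fall back to the ambient environment
    os.environ.update(loaded)
    return loaded
```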