OxenCode is a learning-oriented AI programming assistant written in Go. Its architecture separates agent execution from tool execution, and its context management system lets long-running tasks proceed without exhausting the model's context window.
NOTE: This is a learning project focused on Agent engineering best practices and innovative context management strategies.
OxenCode's core innovation is the separation of agent execution and tool execution environments. This design provides:
- Enhanced Security: Tools run in isolated environments, preventing unintended side effects
- Improved Reliability: Tool failures don't crash the agent process
- Better Resource Management: Separate resource limits for agent reasoning and tool execution
Unlike traditional AI assistants that fail when context exceeds model limits, OxenCode handles extended tasks through:
- Multi-batch Context Compression: Asynchronous, user-transparent context management
- Context Hierarchies: L0 > L1 > L2 compression levels for optimal performance
- Session Isolation: Task contexts are isolated to prevent interference
- Stable Prefix Design: High cache hit rates for efficient token usage
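One way to picture the L0 > L1 > L2 hierarchy is below. This is a simplified sketch under stated assumptions, not OxenCode's algorithm: `Entry`, `compact`, and the plain-text "summary" are all illustrative, and a real system would generate the summary with an LLM call. The point is that old verbatim (L0) turns get folded into a compact L1 summary while the prompt prefix stays stable for cache hits.

```go
package main

import "fmt"

// Level is a hypothetical compression tier: L0 holds verbatim turns,
// L1 holds batch summaries, L2 holds a single long-horizon summary.
type Level int

const (
	L0 Level = iota // verbatim, most recent turns
	L1              // batch summaries
	L2              // long-horizon summary
)

type Entry struct {
	Level Level
	Text  string
}

// compact folds the oldest excess L0 entries into one L1 summary once
// the verbatim window exceeds maxVerbatim. The summary text here is a
// placeholder for an LLM-generated one.
func compact(history []Entry, maxVerbatim int) []Entry {
	var verbatim []int
	for i, e := range history {
		if e.Level == L0 {
			verbatim = append(verbatim, i)
		}
	}
	if len(verbatim) <= maxVerbatim {
		return history
	}
	excess := len(verbatim) - maxVerbatim
	summary := Entry{Level: L1, Text: fmt.Sprintf("summary of %d earlier turns", excess)}
	out := []Entry{summary}
	out = append(out, history[verbatim[excess-1]+1:]...)
	return out
}

func main() {
	h := []Entry{{L0, "turn 1"}, {L0, "turn 2"}, {L0, "turn 3"}, {L0, "turn 4"}}
	h = compact(h, 2)
	fmt.Println(len(h), h[0].Text) // 3 summary of 2 earlier turns
}
```

Because compression happens behind the scenes (asynchronously in OxenCode's case), the user never sees the task interrupted by a context-limit error.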
OxenCode implements the ReAct (Reasoning + Acting) pattern for iterative problem-solving:
- Thought → Action → Observation cycle for complex tasks
- Stream Processing: Real-time display of AI reasoning and tool execution
- Error Recovery: Automatic retry and strategy adjustment on failures
- LLM Reasoning Display: Shows model thinking process for supported models
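The cycle above can be condensed into a short loop. This is a scripted illustration, not OxenCode's implementation: `plan` stands in for a real LLM call, the tool execution is stubbed, and the error-recovery/retry path is omitted for brevity.

```go
package main

import "fmt"

// step is a hypothetical single ReAct iteration: the model produces a
// Thought and either an Action (a tool call) or a final Answer.
type step struct {
	Thought string
	Action  string // empty when the model answers directly
	Answer  string
}

// plan stands in for an LLM call; here it is scripted for illustration.
func plan(observation string, turn int) step {
	if turn == 0 {
		return step{Thought: "I need to list Go files", Action: "Glob *.go"}
	}
	return step{Thought: "I have the list", Answer: "found: " + observation}
}

// react runs Thought → Action → Observation until an answer appears or
// maxTurns is reached.
func react(maxTurns int) string {
	obs := ""
	for turn := 0; turn < maxTurns; turn++ {
		s := plan(obs, turn)
		if s.Answer != "" {
			return s.Answer
		}
		// Execute the tool and feed its result back as the observation.
		obs = "main.go"
	}
	return "gave up"
}

func main() {
	fmt.Println(react(5)) // found: main.go
}
```

Streaming and reasoning display fit naturally here: each `Thought` and each tool result can be rendered to the TUI the moment it is produced.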
Built-in tools for common development tasks:
- File Operations: Glob, Grep, Read, Write, Edit
- System Commands: Bash with configurable timeout
- Permission System: User authorization for dangerous operations
- Smart Validation: Parameter validation before tool execution
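The permission system can be sketched as a gate in front of the tool dispatcher. The `dangerous` set and `authorize` function below are illustrative, not OxenCode's actual policy; in the real TUI the approval callback would prompt the user rather than being a fixed function.

```go
package main

import "fmt"

// dangerous lists tool names that require user authorization before
// running; the set here is illustrative, not OxenCode's actual policy.
var dangerous = map[string]bool{"Bash": true, "Write": true, "Edit": true}

// authorize asks the user (via the approve callback, stubbed in main)
// whether a dangerous tool may run; read-only tools pass through.
func authorize(tool string, approve func(string) bool) bool {
	if !dangerous[tool] {
		return true
	}
	return approve(tool)
}

func main() {
	alwaysDeny := func(string) bool { return false }
	fmt.Println(authorize("Grep", alwaysDeny)) // read-only: true
	fmt.Println(authorize("Bash", alwaysDeny)) // dangerous: false
}
```

Parameter validation would sit just behind this gate, rejecting malformed arguments before the tool ever executes.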
Support for multiple LLM providers:
- Anthropic: Claude (Sonnet, Haiku) with extended thinking
- OpenAI: GPT-4, GPT-4o, o1 series
- Google: Gemini 2.0 Flash, Gemini 1.5 Pro
- Qwen: Qwen-Max, Qwen-Plus, Qwen-Turbo
- DeepSeek: DeepSeek-Chat, DeepSeek-Coder, DeepSeek-Reasoner
- GLM: GLM-4 series
- Azure OpenAI, AWS Bedrock, OpenRouter, and more
- Go 1.25 or later
- An API key for your preferred LLM provider
# Clone the repository
git clone https://github.com/yourname/oxencode.git
cd oxencode
# Build the project
go build -o oxencode ./cmd/oxencode
# (Optional) Install to system
go install ./cmd/oxencode
- Copy the example configuration:
mkdir -p ~/.config/oxencode
cp config.example.toml ~/.config/oxencode/config.toml
- Edit ~/.config/oxencode/config.toml with your settings:
# Choose your provider
provider = "anthropic" # or "openai", "deepseek", "qwen", etc.
# Set your model
model = "claude-sonnet-4-5-20250514"
# Configure work directory (optional)
work_dir = "." # Current directory
# Set tool timeout (optional)
tool_timeout = 120 # seconds
- Set your API key as an environment variable:
# For Anthropic Claude
export ANTHROPIC_API_KEY="your-key-here"
# For OpenAI
export OPENAI_API_KEY="your-key-here"
# For DeepSeek
export DEEPSEEK_API_KEY="your-key-here"
./oxencode
You: What files are in this directory?
OxenCode: [Uses Glob tool to list files]
You: Find all Go files that contain "error" and show me the first one.
OxenCode: [Uses Grep to find files, then Read to display content]
You: Create a simple HTTP server in Go that responds "Hello, World"
OxenCode: [Uses Write tool to create main.go with server code]
Press Esc at any time to interrupt a running task.
For detailed documentation, see:
- Architecture Overview - System design and data flow
- ReAct Loop Implementation - How the reasoning cycle works
- Tool Integration - Building and registering tools
- Tool Behavior Reference - Available tools and their usage
- TUI Testing - Testing the terminal UI
Full configuration options are documented in config.example.toml.
Key configuration areas:
- Provider Selection: Choose from 10+ LLM providers
- Model Settings: Model selection, temperature, max tokens
- Extended Thinking: Enable reasoning for supported models
- Work Directory: Where tools operate
- Tool Timeout: Safety limit for tool execution
Contributions are welcome! This is a learning project, so feel free to:
- Report bugs
- Suggest new features
- Submit pull requests
- Improve documentation
- Share your usage patterns
When contributing, please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
OxenCode uses a layered architecture:
┌─────────────────────────────────────┐
│       Presentation Layer (TUI)      │
│            Bubble Tea UI            │
└─────────────────────────────────────┘
                  │
┌─────────────────────────────────────┐
│          Application Layer          │
│     Chat / Tool / Auth Managers     │
└─────────────────────────────────────┘
                  │
┌─────────────────────────────────────┐
│             Domain Layer            │
│ Agent (ReAct) + Tools + Permissions │
│             Fantasy SDK             │
└─────────────────────────────────────┘
                  │
┌─────────────────────────────────────┐
│         Infrastructure Layer        │
│    File System / Config / History   │
└─────────────────────────────────────┘
- Enhanced context compression algorithms
- Parallel tool execution
- Tool result caching
- Interactive debugging mode
- Plugin system for custom tools
- Multi-language support in UI
MIT License - see LICENSE file for details
- fantasy - Go SDK for LLM interaction
- Bubble Tea - Terminal UI framework
- Lipgloss - Style and formatting
- All open-source contributors making AI tools accessible
Built with ❤️ for learning Agent engineering