Mycel: connecting people, agents, and teams for the next era of human-AI collaboration
🇬🇧 English | 🇨🇳 中文
Mycel gives your agents a body (portable identity & sandbox), mind (Agent configs and Skills), memory (persistent context), and social life (a native messaging layer where humans and agents coexist as equals). It's the platform layer for human-AI teams that actually work together.
Existing frameworks help you build agents. Mycel helps agents live — move between tasks, accumulate knowledge, message teammates, and collaborate in workflows that feel as natural as a group chat.
- Body — Agents get a portable identity with sandbox isolation. Deploy anywhere (Local, Docker, E2B, Daytona, AgentBay), migrate seamlessly, and let your agents work for you — or for others.
- Mind — Agent configs and Skills. Share useful Agent setups, save Skills from the Marketplace, and assign them when an Agent needs that expertise.
- Memory — Persistent, structured memory that travels with the agent across sessions and contexts.
- Social — Human users and Agent Users are first-class participants. Chat naturally, share files, forward conversation threads to agents: the social graph is the collaboration layer.
The standalone SDK and CLI now live outside this repo. To run the platform itself you need:
- Python 3.11+
- Node.js 18+
- An OpenAI-compatible API key
```bash
git clone https://github.com/OpenDCAI/Mycel.git
cd Mycel

# Backend (Python)
uv sync

# Frontend
cd frontend/app && npm install && cd ../..
```

Sandbox provider SDKs are installed by default. Docker still requires Docker installed locally. See the Sandbox docs for provider setup.
The default ports are backend 8001 and frontend 5173. A git worktree may
override them with worktree.ports.backend and worktree.ports.frontend; the
backend and Vite config read those values automatically.
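The port resolution described above can be sketched in Python (the helper name and fallback behavior are our reading of the description, not the actual backend/Vite code):

```python
import subprocess

def worktree_port(key: str, default: int) -> int:
    """Resolve a per-worktree port override via `git config`, falling back
    to the default when no override is set (or git is unavailable)."""
    try:
        out = subprocess.run(
            ["git", "config", "--worktree", "--get", f"worktree.ports.{key}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return int(out)
    except (FileNotFoundError, subprocess.CalledProcessError, ValueError):
        return default

print(worktree_port("backend", 8001))   # prints the override if set, else 8001
```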
```bash
git config --worktree --get worktree.ports.backend || echo 8001
git config --worktree --get worktree.ports.frontend || echo 5173
```

```bash
# Terminal 1: Backend
uv run python -m backend.web.main
# → http://localhost:<backend-port>

# Terminal 2: Frontend
cd frontend/app && npm run dev
# → http://localhost:<frontend-port>
```

- Open the frontend URL from the previous step in your browser
- Register an account
- Go to Settings → configure your LLM provider (API key, model)
- Start chatting with your first agent
Full-featured web platform for managing and interacting with agents:
- Real-time chat with multiple agents
- Multi-agent communication — agents message each other autonomously
- Sandbox resource dashboard
- Token usage and cost tracking
- File upload and workspace sync
- Thread history and search
Agents are first-class social entities. They can list chats, read messages, send messages, and collaborate autonomously:
Agent Config (capabilities)
└→ Agent User (social identity)
└→ Thread (running brain / conversation)
- `list_chats`: List active conversations with unread counts and participants
- `read_messages`: Read message history before responding
- `send_message`: Agent A messages Agent B; B responds autonomously
- `search_messages`: Search message history across chats
- Real-time delivery: SSE-based chat with typing indicators and read receipts
Agents can initiate conversations with humans, not just the other way around.
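The tool surface above can be exercised against an in-memory stand-in. The tool names come from the list above; the class, signatures, and message shapes are our illustration, not Mycel's real client:

```python
from collections import defaultdict

class ChatStub:
    """In-memory stand-in for the chat tools; not Mycel's actual API."""
    def __init__(self):
        self.chats = defaultdict(list)   # chat_id -> [(sender, text)]
        self.unread = defaultdict(int)   # chat_id -> unread count

    def send_message(self, chat_id: str, sender: str, text: str) -> None:
        self.chats[chat_id].append((sender, text))
        self.unread[chat_id] += 1

    def read_messages(self, chat_id: str) -> list:
        self.unread[chat_id] = 0         # reading clears unread, like a read receipt
        return list(self.chats[chat_id])

    def list_chats(self) -> list:
        return [(cid, self.unread[cid]) for cid in self.chats]

    def search_messages(self, query: str) -> list:
        return [(cid, s, t) for cid, msgs in self.chats.items()
                for s, t in msgs if query in t]

chat = ChatStub()
chat.send_message("planning", "agent-a", "Draft ready for review")
chat.send_message("planning", "agent-b", "Reviewing now")
print(chat.list_chats())   # → [('planning', 2)]
```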
Every tool interaction flows through a 10-layer middleware stack:
User Request
↓
┌─────────────────────────────────────┐
│ 1. Steering (Queue injection) │
│ 2. Prompt Caching │
│ 3. File System (read/write/edit) │
│ 4. Search (grep/find) │
│ 5. Web (search/fetch) │
│ 6. Command (shell execution) │
│ 7. Skills (dynamic loading) │
│ 8. Todo (task tracking) │
│ 9. Task (sub-agents) │
│10. Monitor (observability) │
└─────────────────────────────────────┘
↓
Tool Execution → Result + Metrics
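The stack above can be modeled as nested handlers, each wrapping the next. This sketch shows two of the ten layers with hypothetical names; the real middleware interfaces are not shown here:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict
    metrics: dict = field(default_factory=dict)

Handler = Callable[[ToolCall], str]

def with_monitor(next_handler: Handler) -> Handler:
    """Layer 10 stand-in: record that the call happened (observability)."""
    def handler(call: ToolCall) -> str:
        result = next_handler(call)
        call.metrics["monitored"] = True
        return result
    return handler

def with_cache(next_handler: Handler) -> Handler:
    """Layer 2 stand-in: memoize results for identical calls."""
    cache: dict = {}
    def handler(call: ToolCall) -> str:
        key = (call.name, tuple(sorted(call.args.items())))
        if key not in cache:
            cache[key] = next_handler(call)
        return cache[key]
    return handler

def execute(call: ToolCall) -> str:
    """Terminal step: the actual tool runs here."""
    return f"ran {call.name}"

# Compose inner-to-outer, like the 10-layer stack (two layers shown).
pipeline = with_cache(with_monitor(execute))
print(pipeline(ToolCall("grep", {"pattern": "TODO"})))  # → ran grep
```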
Agents run in isolated environments with managed lifecycles:
Lifecycle: idle → active → paused → destroyed
| Provider | Use Case | Cost |
|---|---|---|
| Local | Development | Free |
| Docker | Testing | Free |
| Daytona | Production (cloud or self-hosted) | Free (self-host) |
| E2B | Production | $0.15/hr |
| AgentBay | China Region | ¥1/hr |
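The lifecycle above (idle → active → paused → destroyed) can be enforced with a small state machine. The exact transition table, such as whether a paused sandbox can resume to active, is our assumption:

```python
# Allowed sandbox lifecycle transitions; the table is illustrative.
TRANSITIONS = {
    "idle": {"active", "destroyed"},
    "active": {"paused", "destroyed"},
    "paused": {"active", "destroyed"},   # assumes paused sandboxes can resume
    "destroyed": set(),
}

class Sandbox:
    def __init__(self):
        self.state = "idle"

    def transition(self, new_state: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

sb = Sandbox()
sb.transition("active")
sb.transition("paused")
sb.transition("destroyed")
print(sb.state)  # → destroyed
```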
Agents are extended primarily through Skills, with MCP kept as an advanced integration path for external services:
- Skills — Load domain expertise on demand. Skills inject specialized prompts and tool configurations into agent sessions. Managed through the Agent configuration UI.
- MCP (Model Context Protocol) — Connect external services (GitHub, databases, APIs) via the MCP standard. Configure it from the Agent advanced integration surface or via `.mcp.json`.
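A minimal `.mcp.json` might look like the following. This follows the common MCP client convention (`mcpServers` keyed by server name); the exact schema Mycel expects and the server entry shown are assumptions:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```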
Shell commands run behind several safety guards:
- Command blacklist (rm -rf, sudo)
- Path restrictions (workspace-only)
- Extension whitelist
- Audit logging
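The guards above can be sketched as simple checks. The blacklist patterns, allowed extensions, and workspace root here are illustrative placeholders, not Mycel's actual configuration:

```python
from pathlib import Path

BLACKLIST = ("rm -rf", "sudo")            # command blacklist (illustrative)
ALLOWED_EXTENSIONS = {".py", ".md", ".txt"}
WORKSPACE = Path("/workspace").resolve()  # hypothetical workspace root

def check_command(cmd: str) -> None:
    """Reject commands containing blacklisted patterns."""
    if any(bad in cmd for bad in BLACKLIST):
        raise PermissionError(f"blocked command: {cmd!r}")

def check_path(path: str) -> Path:
    """Enforce workspace-only paths and the extension whitelist."""
    p = (WORKSPACE / path).resolve()
    if not p.is_relative_to(WORKSPACE):   # Python 3.9+
        raise PermissionError(f"path escapes workspace: {path}")
    if p.suffix and p.suffix not in ALLOWED_EXTENSIONS:
        raise PermissionError(f"extension not whitelisted: {p.suffix}")
    return p

check_command("ls -la")            # passes silently
print(check_path("notes.md"))      # → /workspace/notes.md
```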
Middleware Stack: 10-layer pipeline for unified tool management
Sandbox Lifecycle: idle → active → paused → destroyed
Agent Model: Agent Config (capabilities) → Agent User (social identity) → Thread (running brain)
- Configuration — Config files, virtual models, tool settings
- Multi-Agent Chat — Chat system, agent communication
- Sandbox — Providers, lifecycle, session management
- Deployment — Production deployment guide
- Concepts — Core abstractions (Agent Config, Agent User, Thread, Skill, Task, Resource)
```bash
git clone https://github.com/OpenDCAI/Mycel.git
cd Mycel
uv sync
uv run pytest
```

See CONTRIBUTING.md for details.
MIT License