- It can store messages, artifacts, agent skills, and sandboxes.
- It provides observability over the context data.
- It supports many out-of-the-box context engineering methods.
Context Storage
- Session: unified message storage for any LLM, any modality.
- Context Editing: manage your context window with one API.
- Disk: save and download artifacts by file path.
- Agent Skills: equip your agents with reusable skills.
- Sandbox: run code, analyze data, export artifacts.
Context Observability
- Session Summary: asynchronously summarize the agent's progress and user feedback.
- State Tracking: collect the agent's working status in near real time.
View everything in one dashboard
[Dashboard: agent success rate and other metrics]
- Go to Acontext.io and claim your free credits.
- Go through the one-click onboarding to get your API key: `sk-ac-xxx`
Self-host Acontext
We provide acontext-cli to help you build a quick proof of concept. Download it in your terminal:
```bash
curl -fsSL https://install.acontext.io | sh
```

You should have Docker installed and an OpenAI API key to start an Acontext backend on your computer:
```bash
mkdir acontext_server && cd acontext_server
acontext server up
```

Make sure your LLM can call tools. By default, Acontext uses `gpt-4.1`.

`acontext server up` will create or reuse `.env` and `config.yaml` for Acontext, and create a `db` folder to persist data.
Once it's done, you can access the following endpoints:
- Acontext API Base URL: http://localhost:8029/api/v1
- Acontext Dashboard: http://localhost:3000/
We maintain Python and TypeScript SDKs. The snippets below use Python; click the doc link for the TypeScript SDK quickstart.
```bash
pip install acontext
```

```python
import os
from acontext import AcontextClient
# For cloud:
client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)
# For self-hosted:
client = AcontextClient(
    base_url="http://localhost:8029/api/v1",
    api_key="sk-ac-your-root-api-bearer-token",
)
```

Store messages in OpenAI, Anthropic, or Gemini format; they are auto-converted on retrieval.
```python
# Create session and store messages
session = client.sessions.create()
# Store text, image, file, etc.
client.sessions.store_message(
    session_id=session.id,
    blob={"role": "user", "content": "Hello!"},
    format="openai",
)
# Retrieve in any format (auto-converts)
result = client.sessions.get_messages(session_id=session.id, format="anthropic")
```
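Beyond plain text, the blob can carry multimodal content. A minimal sketch of storing an image message, assuming OpenAI's content-parts layout; the image URL is a placeholder:

```python
# Sketch: store a multimodal (image) message in OpenAI's content-parts format.
client.sessions.store_message(
    session_id=session.id,
    blob={
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            # placeholder URL for illustration only
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
        ],
    },
    format="openai",
)
```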
Compress context with summaries and edit strategies; the original messages are left unchanged.

```python
# Session summary for prompt injection
summary = client.sessions.get_session_summary(session.id)
system_prompt = f"Previous tasks:\n{summary}\n\nContinue helping."

# Context editing: limit tokens on retrieval
result = client.sessions.get_messages(
    session_id=session.id,
    edit_strategies=[
        {"type": "remove_tool_result", "params": {"keep_recent_n_tool_results": 3}},
        {"type": "token_limit", "params": {"limit_tokens": 30000}},
    ],
)
```
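Edit strategies are applied at retrieval time only, so fetching the session again without them still returns the full history. A quick check using the same calls as above:

```python
# Edit strategies apply per retrieval; the stored history is untouched.
full = client.sessions.get_messages(session_id=session.id, format="openai")
edited = client.sessions.get_messages(
    session_id=session.id,
    edit_strategies=[{"type": "token_limit", "params": {"limit_tokens": 30000}}],
)
# `edited` is a compressed view; `full` still contains every stored message.
```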
Disk Tool
Persistent file storage for agents. Supports read, write, grep, and glob.
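Before wiring the tools into a request, you can inspect which operations the model will be offered. A small sketch, assuming `to_openai_tool_schema()` returns a list in OpenAI's function-tool format (the example below passes it directly as `tools=`, so each entry should carry a `function.name`):

```python
from acontext.agent.disk import DISK_TOOLS

# Assumption: entries follow OpenAI's function-tool shape,
# {"type": "function", "function": {"name": ..., "parameters": ...}}
for tool in DISK_TOOLS.to_openai_tool_schema():
    print(tool["function"]["name"])
```

The full example below then exposes these tools to the model in a chat completion request.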
```python
import json

from acontext.agent.disk import DISK_TOOLS
from openai import OpenAI
disk = client.disks.create()
ctx = DISK_TOOLS.format_context(client, disk.id)
# Pass to LLM
response = OpenAI().chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": f"You have disk access.\n\n{ctx.get_context_prompt()}"},
        {"role": "user", "content": "Create a todo.md with 3 tasks"},
    ],
    tools=DISK_TOOLS.to_openai_tool_schema(),
)
# Execute tool calls
for tc in response.choices[0].message.tool_calls:
    result = DISK_TOOLS.execute_tool(ctx, tc.function.name, json.loads(tc.function.arguments))
```
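The loop above executes the tool calls but never reports the results back to the model. A minimal sketch of one complete round trip using the standard OpenAI tool-calling pattern (not an Acontext-specific API); serializing results with `str()` is an assumption, so adjust it to the SDK's actual result type:

```python
# Rebuild the conversation, execute the requested tools,
# and send the results back for a final answer.
messages = [
    {"role": "system", "content": f"You have disk access.\n\n{ctx.get_context_prompt()}"},
    {"role": "user", "content": "Create a todo.md with 3 tasks"},
]
assistant_msg = response.choices[0].message
messages.append(assistant_msg)  # the assistant turn that requested the tools
for tc in assistant_msg.tool_calls:
    result = DISK_TOOLS.execute_tool(ctx, tc.function.name, json.loads(tc.function.arguments))
    messages.append({
        "role": "tool",
        "tool_call_id": tc.id,
        "content": str(result),  # assumption: str() is a reasonable serialization
    })
final = OpenAI().chat.completions.create(
    model="gpt-4.1",
    messages=messages,
    tools=DISK_TOOLS.to_openai_tool_schema(),
)
print(final.choices[0].message.content)
```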
Sandbox Tool
An isolated code execution environment with bash, Python, and common tools.
```python
import json

from acontext.agent.sandbox import SANDBOX_TOOLS
from openai import OpenAI
sandbox = client.sandboxes.create()
disk = client.disks.create()
ctx = SANDBOX_TOOLS.format_context(client, sandbox.sandbox_id, disk.id)
# Pass to LLM
response = OpenAI().chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": f"You have sandbox access.\n\n{ctx.get_context_prompt()}"},
        {"role": "user", "content": "Run a Python hello world script"},
    ],
    tools=SANDBOX_TOOLS.to_openai_tool_schema(),
)
# Execute tool calls
for tc in response.choices[0].message.tool_calls:
    result = SANDBOX_TOOLS.execute_tool(ctx, tc.function.name, json.loads(tc.function.arguments))
```
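For multi-step tasks you can run the same pattern in a loop until the model stops requesting tools. A sketch under the same assumptions as the disk round trip above:

```python
# Iterate: execute tool calls and feed results back until the model
# responds without requesting any more tools.
llm = OpenAI()
messages = [
    {"role": "system", "content": f"You have sandbox access.\n\n{ctx.get_context_prompt()}"},
    {"role": "user", "content": "Run a Python hello world script"},
]
while True:
    resp = llm.chat.completions.create(
        model="gpt-4.1",
        messages=messages,
        tools=SANDBOX_TOOLS.to_openai_tool_schema(),
    )
    msg = resp.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:
        break  # final answer, no more tool work requested
    for tc in msg.tool_calls:
        result = SANDBOX_TOOLS.execute_tool(ctx, tc.function.name, json.loads(tc.function.arguments))
        messages.append({"role": "tool", "tool_call_id": tc.id, "content": str(result)})
print(msg.content)
```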
Sandbox with Skills
Mount reusable Agent Skills into the sandbox at `/skills/{name}/`. Download the xlsx skill.
```python
from acontext import FileUpload
# Upload a skill ZIP (e.g., web-artifacts-builder.zip)
with open("web-artifacts-builder.zip", "rb") as f:
    skill = client.skills.create(file=FileUpload(filename="web-artifacts-builder.zip", content=f.read()))
# Mount into sandbox
ctx = SANDBOX_TOOLS.format_context(
    client, sandbox.sandbox_id, disk.id,
    mount_skills=[skill.id],  # available at /skills/{skill.name}/
)
# Context prompt includes skill instructions
response = OpenAI().chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": f"You have sandbox access.\n\n{ctx.get_context_prompt()}"},
        {"role": "user", "content": "Create an Excel file with a simple budget spreadsheet"},
    ],
    tools=SANDBOX_TOOLS.to_openai_tool_schema(),
)
# Execute tool calls
for tc in response.choices[0].message.tool_calls:
    result = SANDBOX_TOOLS.execute_tool(ctx, tc.function.name, json.loads(tc.function.arguments))
```

You can download a full interactive skill demo with acontext-cli:
```bash
acontext create my-skill --template-path "python/interactive-agent-skill"
```

Download end-to-end scripts with acontext-cli:
Python

```bash
acontext create my-proj --template-path "python/openai-basic"
```

More Python examples:
- python/openai-agent-basic: OpenAI Agents SDK template
- python/agno-basic: Agno framework template
- python/openai-agent-artifacts: an agent that can edit and download artifacts
TypeScript

```bash
acontext create my-proj --template-path "typescript/openai-basic"
```

More TypeScript examples:
- typescript/vercel-ai-basic: an agent built with @vercel/ai-sdk
Note
Check our example repo for more templates: Acontext-Examples.
We're cooking more full-stack Agent Applications! Tell us what you want!
To better understand what Acontext can do, please view our docs.
Star Acontext on GitHub to support the project and receive instant notifications.
```mermaid
graph TB
subgraph "Client Layer"
PY["pip install acontext"]
TS["npm i @acontext/acontext"]
end
subgraph "Acontext Backend"
subgraph " "
API["API<br/>localhost:8029"]
CORE["Core"]
API -->|FastAPI & MQ| CORE
end
subgraph " "
Infrastructure["Infrastructures"]
PG["PostgreSQL"]
S3["S3"]
REDIS["Redis"]
MQ["RabbitMQ"]
end
end
subgraph "Dashboard"
UI["Web Dashboard<br/>localhost:3000"]
end
PY -->|RESTful API| API
TS -->|RESTful API| API
UI -->|RESTful API| API
API --> Infrastructure
CORE --> Infrastructure
Infrastructure --> PG
Infrastructure --> S3
Infrastructure --> REDIS
Infrastructure --> MQ
style PY fill:#3776ab,stroke:#fff,stroke-width:2px,color:#fff
style TS fill:#3178c6,stroke:#fff,stroke-width:2px,color:#fff
style API fill:#00add8,stroke:#fff,stroke-width:2px,color:#fff
style CORE fill:#ffd43b,stroke:#333,stroke-width:2px,color:#333
style UI fill:#000,stroke:#fff,stroke-width:2px,color:#fff
style PG fill:#336791,stroke:#fff,stroke-width:2px,color:#fff
style S3 fill:#ff9900,stroke:#fff,stroke-width:2px,color:#fff
style REDIS fill:#dc382d,stroke:#fff,stroke-width:2px,color:#fff
style MQ fill:#ff6600,stroke:#fff,stroke-width:2px,color:#fff
```
Join the community for support and discussions:
- Check our roadmap.md first.
- Read contributing.md.
This project is currently licensed under the Apache License 2.0.
