Turn what your agent handles and what you need to prove into active runtime controls.
Ancilis is a Python-first compliance and trust intelligence SDK for AI agents.
Instead of manually choosing controls, analyzing frameworks, building crosswalks, and chasing evidence after the fact, Ancilis uses two inputs to decide what should apply:
- the data your agent handles
- the certifications or trust standards you want to target
From there, Ancilis evaluates agent actions at runtime, activates the right controls automatically, and records audit-ready evidence as the agent runs.
No manual control selection. No framework analysis. No crosswalking spreadsheets. No waiting for review cycles to know where you stand.
Most AI agent security tools stop at runtime policy enforcement.
That matters, but enterprise teams still get stuck with the harder problem:
- Which controls actually apply to this agent?
- Which frameworks matter based on the data it touches?
- What evidence do we have right now?
- Are we closer to certification, or just collecting logs?
Ancilis is built for that next step.
It turns runtime security into automatic compliance control activation, continuous evidence generation, and certification readiness.
Ancilis is not just another runtime security layer.
It is a runtime control and evidence system that lets you declare business reality and let the platform do the compliance work:
- declare `health_records` and activate HIPAA, GDPR, and SOC 2 overlays automatically
- declare `credit_cards` and activate PCI-DSS controls automatically
- declare `ai_training_data` and activate ISO 42001 and EU AI Act overlays automatically
- declare `aiuc-1` as a certification target and generate readiness reporting automatically
- switch from `audit` to `enforce` when you are ready to block violations before execution
This means you do not start with a spreadsheet of frameworks. You start with what your agent is, what it touches, and what you need to prove.
```
What your agent handles + what you need to certify
        ↓
Automatic control and overlay activation
        ↓
Runtime evaluation of tool calls and actions
        ↓
Tamper-evident evidence written locally
        ↓
Status, reports, and certification readiness output
```
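The middle of that flow — deterministic evaluation plus evidence recording — can be sketched in plain Python. This is a conceptual illustration only, not the Ancilis API: the function name, the event record shape, and the `audit`/`enforce` handling are assumptions made for the sketch.

```python
import time

# Hypothetical sketch of the evaluate-then-record loop described above.
# Names and record shapes are illustrative, not Ancilis internals.

ALLOWED_TOOLS = {"search_docs"}   # stands in for tools.allowed in config
MODE = "audit"                    # or "enforce"
evidence_log = []                 # stands in for the local evidence store

def evaluate_action(tool_name: str, args: dict) -> bool:
    """Deterministically evaluate one tool call against an allowlist control."""
    passed = tool_name in ALLOWED_TOOLS
    # Every evaluation is recorded, whether it passes or fails.
    evidence_log.append({
        "action": tool_name,
        "args": args,
        "control": "tools.allowed",
        "passed": passed,
        "reason": "tool allowlisted" if passed else "tool not in allowlist",
        "timestamp": time.time(),
    })
    # In enforce mode, a failed evaluation blocks the call before execution.
    if not passed and MODE == "enforce":
        raise PermissionError(f"blocked before execution: {tool_name}")
    return passed

print(evaluate_action("search_docs", {"query": "billing"}))  # True
print(evaluate_action("delete_records", {}))  # False (audit mode: recorded, not blocked)
```

The key design point is that audit and enforce modes share one evaluation path: the only difference is whether a failing result raises before the tool runs.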
Ancilis works with:
- MCP clients and middleware
- CLI agents
- explicit HTTP wrappers
- plain Python tool functions
Install Ancilis:
```bash
pip install ancilis

# optional MCP support
pip install "ancilis[mcp]"
```

Create `ancilis.yaml`:
```yaml
agent:
  name: my-agent

security:
  mode: audit

tools:
  allowed:
    - search_docs

my_agent_handles:
  - health_records
  - personal_info

certification_targets:
  - aiuc-1
```

Wrap a tool:
```python
from ancilis import ToolActionProducer, load_config
from ancilis.engine import Engine

config = load_config()
engine = Engine(config)
producer = ToolActionProducer(config=config, engine=engine)

def search_docs(query: str) -> str:
    return f"Found 3 results for: {query}"

search_docs = producer.wrap_tool(search_docs, tool_name="search_docs")

result = search_docs("account billing")
print(result)
```

Check your current posture:
```bash
ancilis status
```

Example output:
```
Ancilis — my-agent

Mode: audit
Controls: active and evaluating
Tool calls: 1 evaluated, 0 blocked

AIUC-1: active
HIPAA Security Rule: active
GDPR: active
SOC 2 Type II: active
```
| You declare | Ancilis activates |
|---|---|
| `my_agent_handles: [health_records]` | HIPAA, GDPR, SOC 2 overlays |
| `my_agent_handles: [credit_cards]` | PCI-DSS overlay |
| `my_agent_handles: [ai_training_data]` | ISO 42001 and EU AI Act overlays |
| `certification_targets: [aiuc-1]` | AIUC-1 readiness reporting |
| `security.mode: enforce` | violations blocked before execution |
This is the core idea:
You should not have to manually select controls for every agent. You should not have to interpret every framework from scratch. You should not have to wait for an annual review to know whether your posture is drifting.
Every evaluation produces audit-ready evidence.
Ancilis records:
- what action was attempted
- which control evaluated it
- whether it passed or failed
- why it passed or failed
- when it happened
- the evidence chain linking that event to the rest of the record
Evidence is stored locally in DuckDB with cryptographic hash chaining so you can inspect, report, and retain it without a hosted control plane.
- Automatic compliance activation for agents that handle regulated data
- Certification readiness for teams targeting AIUC-1 and similar trust signals
- Security reviews for enterprise buyers who want proof of runtime controls and evidence
- Continuous posture tracking between audit cycles
- Safer MCP and tool usage for agents that call external systems
- A compliance-ready control layer for internal copilots and production agents
Add a certification target:
```yaml
certification_targets:
  - aiuc-1
```

Generate readiness output:

```bash
ancilis report --format aiuc1-readiness
```

This lets teams move from “we think we are covered” to a concrete, evidence-backed readiness view without building framework crosswalks by hand.
Declare what your agent handles:
```yaml
my_agent_handles:
  - health_records
  - personal_info
```

Ancilis activates the relevant overlays automatically and extends evidence requirements where needed.
That means your compliance posture follows the agent’s real data exposure, not a static spreadsheet.
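Conceptually, declaration-to-overlay activation behaves like a lookup that unions overlays across everything the agent handles. The mapping below is reconstructed from the activation table earlier in this document; the function name and data shapes are illustrative, not Ancilis internals.

```python
# Illustrative mapping, reconstructed from the activation table above.
OVERLAY_MAP = {
    "health_records": ["HIPAA", "GDPR", "SOC 2"],
    "credit_cards": ["PCI-DSS"],
    "ai_training_data": ["ISO 42001", "EU AI Act"],
}

def active_overlays(declared: list[str]) -> set[str]:
    """Union of overlays across every declared data class."""
    overlays: set[str] = set()
    for data_class in declared:
        overlays.update(OVERLAY_MAP.get(data_class, []))
    return overlays

print(sorted(active_overlays(["health_records", "credit_cards"])))
# ['GDPR', 'HIPAA', 'PCI-DSS', 'SOC 2']
```

Because the lookup is driven by declared data classes rather than a hand-picked control list, adding a new data class to the config changes the active overlay set without any framework analysis.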
Ancilis still does the runtime work:
- evaluates tool calls deterministically
- supports audit and enforce modes
- records every decision as evidence
- works across multiple producer types
But the differentiated value is not just “runtime security.”
The differentiated value is that runtime evaluation becomes the engine for:
- automatic control selection
- automatic overlay activation
- certification readiness
- continuous evidence generation
- lower compliance overhead for every new agent
- `examples/certification-driven` — certification target to readiness reporting
- `examples/data-classification` — data declaration to automatic overlays
- `examples/mcp-middleware` — MCP tool-call evaluation in audit or enforce mode
- `examples/cli-agent` — command evaluation and blocking
| Command | What it does |
|---|---|
| `ancilis status` | current posture in plain language |
| `ancilis status --verbose` | per-control detail with activation sources |
| `ancilis config validate` | validates config with actionable errors |
| `ancilis report` | terminal posture report |
| `ancilis report --format markdown` | markdown report for review |
| `ancilis report --format aiuc1-readiness` | AIUC-1 readiness report |
| `ancilis report --format pdf` | PDF report for procurement or audit |
| `ancilis doctor` | setup diagnostics and next steps |
- Python is the primary supported path
- TypeScript is preview
- HTTP support is explicit wrapping, not universal interception
- evidence is tamper-evident, not tamper-proof
- some controls and overlays are deeper than others today
Ancilis is for teams building AI agents that need to answer questions like:
- What controls apply to this agent?
- What evidence do we have right now?
- What changes when the agent touches regulated data?
- What do we need for certification or procurement?
- Can we prove the agent is operating inside approved boundaries?
- Security disclosures: security@ancilis.ai
- Contributions welcome under the project license
- Licensed under Business Source License 1.1
