Open-source governance and compliance framework for AI agents — cross-provider policy enforcement, audit trails, and standards mapping across OWASP MCP Top 10, NIST AI RMF, and EU AI Act
Updated Mar 20, 2026
Trust your agents in production. Turns what your agent handles, and what you need to prove, into automatic runtime security controls, scaling compliance across your agents.
ACR Control Plane: runtime control & governance for agentic AI (six-pillar enforcement).
W3C DID Method Specification for TRAIL — Trust Registry for AI Identity Layer
Fraud-risk scoring engine for autonomous AI agents. Detects behavioral anomalies, delegation abuse, and coordinated agent activity.
A learning path for AI trust.
Inverse Turing test for AI agents. Procedurally generated challenges that prove substrate, autonomy, and intent — things a human can't fake. Self-hosted, open source, MIT licensed.
Rust CLI tool to submit LLM prompts and receive a ZK-proof that the output was generated by a specific model.