Tiny MIT SDK for evidence-gated AI workflows: jamming, cost, approval and witness-ledger decisions before agents act.
Updated May 2, 2026 - Python
Topics: human-approval, hardware-enforced AI safety, physical AI intervention, humanoid robot safety, fail-safe AI, human-approval enforcement, invariant condition, quantum-resistant AI, swarm robot safety, AI safety compliance.
AndyAI Financial Modeling Engine — governed AI-assisted finance workflows
CrewAI agents lose state on a crash and have no real human-approval step. AXME adds both: durable state and async HITL (human-in-the-loop) approval.
Stop losing hours to blocked agents. Async human approvals with reminders, escalation, and timeout for any AI agent.
Pause Metaflow pipelines for human review with zero compute cost
Google ADK's LongRunningFunctionTool is a workaround. AXME gives your ADK agents real async human approval with reminders.
Pydantic AI has no built-in human-in-the-loop. Add async approvals with reminders and a timeout in about 10 lines.
Durable execution with human approval built in. What Temporal can't do in 80 lines, AXME does in 4.
OpenAI Agents SDK can pause for approval. But who reminds the human? Who escalates? AXME adds the missing pieces.
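The approval pattern these one-liners keep describing (pause the agent, poll for a human decision, nudge the approver periodically, give up after a timeout) can be sketched in plain asyncio. This is a hypothetical illustration, not the AXME API: `wait_for_approval`, `get_decision`, and `remind` are names invented for this sketch.

```python
import asyncio

# Hypothetical sketch of an async human-approval gate (not the AXME API):
# poll for a decision, send a reminder on each idle interval, and time out.

async def wait_for_approval(request_id, get_decision, remind,
                            reminder_every=2.0, timeout=10.0):
    """Wait for a human decision on `request_id`.

    get_decision(request_id) -> "approved" | "rejected" | None (still pending)
    remind(request_id) is called after each idle `reminder_every` interval.
    Returns the decision, or "timed_out" if none arrives within `timeout`.
    """
    async def poll():
        while True:
            decision = get_decision(request_id)
            if decision is not None:
                return decision
            await asyncio.sleep(reminder_every)
            remind(request_id)  # nudge the approver while still pending

    try:
        return await asyncio.wait_for(poll(), timeout=timeout)
    except asyncio.TimeoutError:
        return "timed_out"
```

A real implementation would persist the pending request (so it survives a crash) and escalate to a different approver after N reminders; the sketch only shows the remind-then-timeout control flow.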