Cryptographic audit receipts for AI coding agents. Ed25519 + Merkle + RFC 3161 TSA. Supports Claude Code & Cursor.
Updated Apr 16, 2026 · HTML
ATLAST Protocol — The Trust Layer for the Agent Economy. Make AI agent work verifiable with Evidence Chain Protocol (ECP). Open source · MIT License · weba0.com
Append-only event kernel with Ed25519-signed Merkle checkpoints. Every AI action gets a verifiable receipt.
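A design like this — an append-only event log whose checkpoints commit to every event via a Merkle root — can be sketched in a few lines. This is an illustrative stdlib-only sketch, not code from any listed project; all names (`merkle_root`, the event payloads) are hypothetical, and the Ed25519 signature over the checkpoint is omitted because it requires a third-party library.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root over the leaves, duplicating the last
    node when a level has odd length (Bitcoin-style padding)."""
    if not leaves:
        return sha256(b"")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd-length level
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Append-only log: each AI action is one event; a checkpoint commits
# to every event so far through the Merkle root. Signing that 32-byte
# root (e.g. with Ed25519) would authenticate the whole history.
events = [b'{"tool":"edit_file"}', b'{"tool":"run_tests"}']
checkpoint = merkle_root(events)
print(checkpoint.hex())
```

Checkpointing the root rather than the full log keeps signatures constant-size while still making any retroactive edit to an earlier event detectable.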
Cryptographic receipt system for AI agent accountability. Tamper-evident, hash-chained receipts with Ed25519/HMAC signing.
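The hash-chained, HMAC-signed receipt pattern this description names can be sketched with the standard library alone. A minimal sketch under stated assumptions: `make_receipt` and `verify_chain` are hypothetical names, the genesis link is all zeros by convention here, and the Ed25519 variant is omitted since it needs a third-party library.

```python
import hashlib
import hmac
import json

def make_receipt(prev_hash: str, payload: dict, key: bytes) -> dict:
    """Create a receipt linked to its predecessor: the body binds the
    payload to prev_hash, and the HMAC tag authenticates the body."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {
        "prev": prev_hash,
        "payload": payload,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
        "sig": hmac.new(key, body.encode(), hashlib.sha256).hexdigest(),
    }

def verify_chain(receipts: list[dict], key: bytes) -> bool:
    """Recompute every link, hash, and HMAC; editing any receipt
    breaks either its own tag or the next receipt's prev pointer."""
    prev = "0" * 64  # genesis link
    for r in receipts:
        if r["prev"] != prev:
            return False
        body = json.dumps({"prev": r["prev"], "payload": r["payload"]},
                          sort_keys=True)
        if r["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(r["sig"], expected):
            return False
        prev = r["hash"]
    return True

key = b"demo-key"
r1 = make_receipt("0" * 64, {"action": "write_file"}, key)
r2 = make_receipt(r1["hash"], {"action": "run_tests"}, key)
assert verify_chain([r1, r2], key)
```

Because each receipt's hash covers its predecessor's hash, tampering with one record invalidates every record after it, which is what makes the chain tamper-evident rather than merely signed.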
Measurement infrastructure for multi-turn AI interaction safety evaluation
AISS v2.0.0 — standalone release
A surfaces bottlenecks in human-only workflows, while B targets agentic and human-in-the-loop (HITL) workflows to ensure accountability and prevent automation bias.
Eziokwu: Heart-centered AI accountability framework for verifying algorithmic decision-making and organizing evidence for regulatory evaluation. Truth infrastructure built on Igbo philosophical principles.
Official CLG wrapper for Model Context Protocol: tamper-evident decision and outcome receipts and real-time mandate enforcement for MCP tool calls.
∈ Principle — A philosophical and institutional proposal for public authorship and responsibility in the age of AI. Foundational text with DOI (Zenodo). CC BY 4.0.
OpenExecution Provenance Specification — implements AEGIS (Agent Execution Governance and Integrity Standard) for auditable, tamper-evident AI agent behavioral records. Apache 2.0.
Gamified accountability system for Claude Code workflows with progressive consequences, strikes, and rewards. Based on ArXiv 2506.01347 NSR research.
Neutral reference framework for institutional accountability and post-incident review in high-risk autonomous AI systems.
Practical and research-oriented exploration of ethics, responsibility, and governance for AI in software engineering: policy frameworks, case studies, assessment tools, and actionable guidance for responsible AI adoption in engineering teams. Week 07 assignment for the AI & SE learning track.
Bolt-on Python SDK for tamper-evident AI decision records. ADR Specification v0.1 + Reasoning Capture Methodology v1.0.
Post-deployment behavioral measurement framework for AI agents — traces failures, quantifies preventable waste, maps correction persistence, and produces governance-ready evidence from real production sessions
Open reference implementation and verification toolkit for deterministic AI decision accountability.
Structural reference on retaining final human refusal power before irreversible autonomous execution.
A deterministic safety layer for probabilistic AI systems — preventing delusion reinforcement and AI-induced psychological harm through immutable governance