CITATION.cff
cff-version: 1.2.0
message: "If you use ClampAI in your research, please cite it as below."
title: "ClampAI: Formal Safety Framework for AI Agents with Provable Guarantees"
type: software
authors:
  - given-names: Ambar
    affiliation: "National University of Singapore"
version: "1.0.1"
date-released: "2026-02-28"
license: MIT
url: "https://github.com/Ambar-13/ClampAI"
repository-code: "https://github.com/Ambar-13/ClampAI"
description: >-
  A formal safety framework for AI agents that enforces mathematical
  invariants through state-transition simulation rather than prompt-based
  guardrails. Provides eight proven safety theorems (T1-T8: budget safety,
  termination, invariant preservation, monotone resources, atomicity, trace
  integrity, rollback exactness, and emergency override) enforced by a
  non-bypassable safety kernel. Ships with 25 pre-built invariant factories,
  adapters for Anthropic, OpenAI, OpenClaw, LangChain, LangGraph, FastAPI,
  and MCP, and a clampai.testing module for writing safety property tests.
  Evaluated against 39 adversarial attack scenarios across 9 threat
  categories with 89.7% recall and zero false positives at sub-millisecond
  latency (45,000+ checks per second). Zero runtime dependencies.
keywords:
  - ai-safety
  - agent-safety-framework
  - formal-verification
  - autonomous-agents
  - invariant-checking
  - llm-safety
  - mathematical-constraints
  - state-machines
  - execution-semantics