
deliberative-ai

1 public repository matches this topic.

moralstack

MoralStack is a governance and safety layer for LLM applications. It analyzes user requests before generation, evaluates risk and intent, and decides whether the AI should answer normally, answer safely, or refuse. The goal is to make AI systems more auditable, controllable, and reliable in sensitive or regulated contexts.

  • Updated Apr 14, 2026
  • Python
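The description above outlines a pre-generation gating flow: evaluate the request's risk and intent, then decide to answer normally, answer with safeguards, or refuse. MoralStack's actual API is not shown here, so the sketch below is a minimal illustration of that decision pattern, not the project's implementation; the `Decision` enum, `Verdict` type, `evaluate` function, and the keyword heuristic are all hypothetical stand-ins for a real risk/intent model.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ANSWER = "answer"            # respond normally
    SAFE_ANSWER = "safe_answer"  # respond with extra safeguards
    REFUSE = "refuse"            # decline the request


@dataclass
class Verdict:
    decision: Decision
    risk: float       # 0.0 (benign) .. 1.0 (high risk)
    rationale: str    # recorded for auditability


# Hypothetical keyword lists standing in for a learned risk/intent classifier.
HIGH_RISK = {"exploit", "weapon"}
SENSITIVE = {"medical", "legal"}


def evaluate(request: str) -> Verdict:
    """Gate a user request before any text is generated."""
    words = set(request.lower().split())
    if words & HIGH_RISK:
        return Verdict(Decision.REFUSE, 0.9, "high-risk intent signal")
    if words & SENSITIVE:
        return Verdict(Decision.SAFE_ANSWER, 0.5, "sensitive domain; add safeguards")
    return Verdict(Decision.ANSWER, 0.1, "no risk signals detected")


verdict = evaluate("summarize this legal contract")
print(verdict.decision.value, "-", verdict.rationale)
```

Returning a structured `Verdict` rather than a bare yes/no is what makes this kind of layer auditable: each gating decision carries a risk score and rationale that can be logged and reviewed later.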
