NuGuard is an open source AI application security CLI. It can generate an AI-focused SBOM from source code, run static security analysis, validate cognitive policy documents, and red-team a live AI app with scenario-driven adversarial testing.
- Generate an AI-SBOM from a local codebase or Git repo
- Analyze the SBOM for structural AI security risks and dependency issues
- Cross-check a cognitive policy against the SBOM
- Red-team a running AI application with prompt injection, tool abuse, data exfiltration, and related attack scenarios
- Export findings as text, JSON, Markdown, or SARIF for downstream code-scanning workflows
Implemented and usable today:

- `nuguard sbom`
- `nuguard analyze`
- `nuguard scan`
- `nuguard policy`
- `nuguard redteam`
Present but still stubbed / not yet implemented:

- `nuguard seed`
- `nuguard report`
- `nuguard findings`
- `nuguard replay`
- Python 3.12+
- `uv` for the recommended local workflow
Optional external tools used by some analysis paths:
- `grype`
- `checkov`
- `trivy`
- `semgrep`
If these tools are not installed, the corresponding checks can be skipped or may report as unavailable depending on the command path.
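Because these scanners are optional, a quick preflight check tells you which analysis paths will actually be available. The sketch below is a generic standard-library helper, not part of the NuGuard CLI:

```python
import shutil

# Optional external scanners that some NuGuard analysis paths can use.
OPTIONAL_TOOLS = ["grype", "checkov", "trivy", "semgrep"]

def preflight(tools=OPTIONAL_TOOLS):
    """Return a mapping of tool name -> True if the tool is on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

if __name__ == "__main__":
    for tool, found in preflight().items():
        print(f"{tool}: {'found' if found else 'missing (checks may be skipped)'}")
```

Running this before a scan makes it obvious which findings to expect and which checks will report as unavailable.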
The steps below describe how to set up a local development environment. For production use, install the package from PyPI with `pip install nuguard`.
Install dependencies:

```
uv sync --dev
```

Run the CLI with:

```
uv run nuguard --help
```

Or, from the virtual environment:

```
. .venv/bin/activate
nuguard --help
```

Generate an SBOM from a local codebase:

```
nuguard sbom generate --source . --output app.sbom.json
```

You can also scan a remote repository:
```
nuguard sbom generate \
  --from-repo https://github.com/org/repo \
  --ref main \
  --output app.sbom.json
```

Analyze the SBOM:

```
nuguard analyze --sbom app.sbom.json --format markdown
```

Typical outputs:
- `markdown` for human review
- `json` for automation
- `sarif` for code scanning pipelines
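In automation, the JSON output can gate a CI job on severity. The snippet below assumes a hypothetical report shape (a list of findings with `id`, `severity`, and `title` fields) purely for illustration; inspect a real `nuguard analyze --format json` report before relying on any field names:

```python
import json

# Hypothetical report shape -- the real JSON schema may differ.
SAMPLE_REPORT = """
[
  {"id": "AI-001", "severity": "high", "title": "Unpinned model dependency"},
  {"id": "AI-002", "severity": "low",  "title": "Missing SBOM metadata"}
]
"""

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def findings_at_or_above(findings, threshold):
    """Keep only findings whose severity meets the given threshold."""
    floor = SEVERITY_ORDER[threshold]
    return [f for f in findings if SEVERITY_ORDER.get(f["severity"], 0) >= floor]

blockers = findings_at_or_above(json.loads(SAMPLE_REPORT), "high")
print(f"{len(blockers)} finding(s) at or above 'high'")  # a CI job could exit non-zero here
```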
Validate policy structure:
```
nuguard policy validate --file cognitive-policy.md
```

Cross-check policy against the SBOM:
```
nuguard policy check \
  --policy cognitive-policy.md \
  --sbom app.sbom.json
```

Run a red-team session against a live target:

```
nuguard redteam \
  --sbom app.sbom.json \
  --target http://localhost:3000 \
  --format json
```

For richer red-team coverage, you can also provide:
- a cognitive policy with `--policy`
- canary values with `--canary`
- a config file with `--config`
```
nuguard scan \
  --source . \
  --output-dir nuguard-reports
```

This is the easiest way to run SBOM generation plus static analysis in one pass.
NuGuard supports project configuration through `nuguard.yaml`. A ready-to-edit example lives at `nuguard.yaml.example`.
Key areas in the example config:
- `sbom`: existing SBOM path
- `source`: source directory for generation
- `policy`: cognitive policy path
- `llm`: model settings for LLM-assisted features
- `redteam`: target URL, endpoint, canary file, profiles, scenario filters, and guided conversation settings
- `analyze`: minimum severity threshold
- `database`: SQLite or Postgres-backed storage settings
- `output`: output format and failure threshold
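The top-level keys might be arranged along the lines of the sketch below. The nesting and subkey names here are guesses for illustration only; `nuguard.yaml.example` in the repository is the authoritative reference:

```yaml
# Illustrative sketch only -- see nuguard.yaml.example for the real template.
sbom: app.sbom.json            # existing SBOM path
source: .                      # source directory for generation
policy: cognitive-policy.md    # cognitive policy path
llm:                           # model settings for LLM-assisted features
  model: <your-model>          # placeholder value
redteam:
  target: http://localhost:3000  # subkey names here are hypothetical
  canary: canary.json
analyze:
  min_severity: medium         # minimum severity threshold (hypothetical key name)
output:
  format: markdown
```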
CLI flags take precedence over `nuguard.yaml`, which takes precedence over environment variables and built-in defaults.
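That precedence chain amounts to a layered merge where higher-priority sources overwrite lower ones. This is a generic sketch of the resolution order described above, not NuGuard's actual implementation:

```python
def resolve_config(cli_flags, yaml_config, env_vars, defaults):
    """Merge config layers per the documented precedence:
    CLI flags > nuguard.yaml > environment variables > built-in defaults."""
    merged = dict(defaults)
    # Apply layers from lowest to highest priority; None means "not set".
    for layer in (env_vars, yaml_config, cli_flags):
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

config = resolve_config(
    cli_flags={"format": "json"},
    yaml_config={"format": "markdown", "policy": "cognitive-policy.md"},
    env_vars={"target": "http://localhost:3000"},
    defaults={"format": "text", "policy": None, "target": None},
)
print(config)  # format from the CLI flag, policy from yaml, target from env
```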
NuGuard can watch for seeded canary values during dynamic testing to produce high-confidence exfiltration findings. Start from `canary.example.json`, create your local `canary.json`, seed those values into the target system, then point `nuguard redteam` at that file with `--canary`.
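The canary workflow can be sketched as below. The JSON shape shown is hypothetical; copy `canary.example.json` for the real schema. The `nuguard redteam` flags are the ones documented above:

```shell
# Hypothetical canary file shape -- use canary.example.json for the real schema.
cat > canary.json <<'EOF'
{
  "canaries": ["NUGUARD-CANARY-7f3a9c1e"]
}
EOF

# Seed these values into the target system (prompts, documents, databases),
# then point the red-team run at the file:
#   nuguard redteam --sbom app.sbom.json --target http://localhost:3000 --canary canary.json
grep -c "NUGUARD-CANARY" canary.json
```

Unique, greppable values like these let the engine attribute any reappearance in model output to a seeded source with high confidence.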
More detail is available in docs/redteam-engine.md.
```
nuguard --help
nuguard sbom --help
nuguard analyze --help
nuguard policy --help
nuguard redteam --help
nuguard scan --help
```

Install dev dependencies:
```
make dev
```

Run tests:

```
make test
```

Run linting and type checks:

```
make lint
```

Format the codebase:

```
make fmt
```

This repo includes GitHub Actions workflows for Trusted Publishing to TestPyPI and PyPI.
Before the workflows can publish, configure Trusted Publishers in TestPyPI and PyPI for the nuguard project with:
- owner/org: `NuGuardAI`
- repository: `nuguard-oss`
- workflow file: `publish-testpypi.yml` or `publish-pypi.yml`
- environment: `testpypi` or `pypi`
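A Trusted Publishing workflow generally needs an `id-token: write` permission (for OIDC) and an `environment` name matching the Trusted Publisher configured on PyPI. The sketch below shows the usual shape under those conventions; the repository's actual `publish-pypi.yml` is the source of truth:

```yaml
# Illustrative sketch -- the repo's publish-pypi.yml is authoritative.
name: Publish to PyPI
on:
  release:
    types: [published]

jobs:
  publish:
    runs-on: ubuntu-latest
    environment: pypi          # must match the Trusted Publisher environment
    permissions:
      id-token: write          # required for PyPI Trusted Publishing (OIDC)
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build
      - uses: pypa/gh-action-pypi-publish@release/v1
```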
Recommended release flow:
- Run the TestPyPI workflow manually from GitHub Actions.
- Verify the package install and CLI behavior from TestPyPI.
- Create a GitHub release to trigger the PyPI publish workflow.
- The repository currently contains example outputs and benchmark fixtures under `tests/output/`
- Some red-team and benchmark tests are opt-in and gated by environment variables
- LLM-assisted features depend on provider credentials being available via environment variables
No license file is currently present in this repository. Add one before treating the project as redistributable open source.