spn · The Agentic AI Toolkit
Your complete AI development environment in one command.
Local models • Cloud providers • MCP servers • Secrets • Workflows • Autonomous agents
Quick Start • Why spn? • Install • Features • Roadmap • Contribute
# 1. Download a local model (100% private, no API keys)
spn model pull llama3.2:1b
# 2. Add an MCP server for knowledge graph access
spn mcp add neo4j
# 3. Check your AI environment status
spn status

That's it. You now have:
- A running local LLM (via Ollama)
- A knowledge graph connection (via Neo4j MCP)
- Unified credential management (via OS Keychain)
- Ready for Claude, GPT, or any LLM provider
| Without spn | With spn |
|---|---|
| Building with AI today means juggling fragmented tooling. | spn unifies your entire AI stack. |
| Result: Hours wasted on setup, zero time building. | Result: 5 minutes to production, forever productive. |
🍺 Homebrew (Recommended for macOS/Linux)

brew install supernovae-st/tap/spn

📦 Cargo (Cross-platform)

cargo install spn-cli

🐳 Docker (Containerized)

# Run directly
docker run --rm ghcr.io/supernovae-st/spn:latest --version

# With project mount
docker run --rm -v $(pwd):/workspace ghcr.io/supernovae-st/spn:latest status

Docker images are ~5MB (scratch-based), support amd64/arm64, and include CA certificates.

📦 Pre-built Binaries
Download from GitHub Releases:
- macOS (Apple Silicon + Intel)
- Linux (x86_64 + ARM64)
- Windows (coming soon)
spn --version # spn-cli 0.16.0
spn doctor # Health check

spn setup

The wizard will:
- Detect existing API keys and offer to migrate them
- Show you where to get new API keys (with links)
- Configure your preferred providers
- Set up MCP servers
- Sync to your editors (Claude Code, Cursor, Windsurf)
# Pull a model from Ollama registry
spn model pull llama3.2:1b
# Load it into memory
spn model load llama3.2:1b
# Check what's running
spn model status

# Store API keys securely in OS Keychain
spn provider set anthropic
spn provider set openai
# Test them
spn provider test all

# Add from 48 built-in aliases
spn mcp add neo4j # Knowledge graph
spn mcp add github # Code integration
spn mcp add perplexity # AI search
# Test connection
spn mcp test neo4j

spn status

Output:
✦ spn status · The Agentic AI Toolkit

📦 LOCAL MODELS
  Ollama   http://localhost:11434   ✓ running
  Memory   2.1 / 16.0 GB (13%)
  Models
  ├── llama3.2:1b  1.2 GB  ⚡ active
  └── mistral:7b   4.1 GB

🔐 CREDENTIALS
  Name       Type  Status   Source       Endpoint
  anthropic  LLM   ✓ ready  🔑 keychain  api.anthropic.com
  openai     LLM   ✓ ready  📦 env       api.openai.com
  ollama     LLM   ✓ local  📦 local     localhost:11434
  neo4j      MCP   ✓ ready  🔑 keychain  bolt://localhost:7687
  7/13 configured · 🔑 2 keychain · 📦 4 env · 1 local

🔌 MCP SERVERS
  Server      Status   Transport  Command  Credential
  neo4j       ✓ ready  stdio      uvx      → neo4j
  perplexity  ✓ ready  stdio      npx      → perplexity
  3/3 active

🔑 7/13 Keys · 🔌 3 MCPs · 📦 2 Models · 📡 Daemon OK
Run LLMs locally with zero API costs and 100% privacy.
spn model pull llama3.2:1b # Download from Ollama registry
spn model load llama3.2:1b # Load into GPU/RAM
spn model status # Check VRAM usage
spn model list # List installed models

Supported: All Ollama models (70+ including Llama, Mistral, CodeLlama, Gemma)
Store API keys in your OS-native keychain with hardened in-memory secret handling.
spn provider set anthropic # Interactive prompt (hidden input)
spn provider list # Show all keys (masked)
spn provider migrate # Move .env โ keychain
spn provider test all # Validate all keys

Security Stack:
- 🔐 OS Keychain (macOS/Windows/Linux native)
- 🧠 Memory protection (mlock, MADV_DONTDUMP)
- 🗑️ Auto-zeroization (Zeroizing<T>)
- 🚫 No debug/display exposure (SecretString)
Supported Providers:
- LLM: Anthropic, OpenAI, Mistral, Groq, DeepSeek, Gemini, Ollama
- MCP: Neo4j, GitHub, Slack, Perplexity, Firecrawl, Supadata
Configure once, use everywhere. No per-editor setup.
spn mcp add neo4j # From 48 built-in aliases
spn mcp add github --global # User-level server
spn mcp list # Show all configured
spn mcp test neo4j # Verify connection
spn sync # Push to editors

Built-in Aliases (48):
- Database: neo4j, postgres, sqlite, supabase
- Dev Tools: github, gitlab, filesystem
- Search/AI: perplexity, brave-search, tavily
- Web: firecrawl, puppeteer, playwright
- Communication: slack, discord
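For orientation, the sketch below shows what a team-level mcp.yaml entry for the neo4j alias might contain. The schema is an illustrative assumption, not the authoritative spn format; the transport, command, and credential values mirror the columns in the status output shown earlier.

```yaml
# ./mcp.yaml: illustrative sketch only, not the authoritative spn schema.
servers:
  neo4j:
    transport: stdio      # matches the Transport column in spn status
    command: uvx          # launcher used to start the server process
    credential: neo4j     # secret resolved from the OS keychain at run time
```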
One command to see your entire AI environment.
spn status # ASCII dashboard
spn status --json # Machine-readable

Shows:
- 📦 Local models (installed, loaded, VRAM)
- 🔐 Credentials (source, status, endpoint)
- 🔌 MCP servers (status, transport, command)
- 📡 Daemon (PID, socket, uptime)
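The JSON mode lends itself to scripting. Below is a minimal sketch that assumes, purely for illustration, that `spn status --json` emits `models`, `credentials`, and `mcp_servers` arrays; the field names are not a documented schema.

```python
import json

# Hypothetical payload standing in for `spn status --json` output;
# every field name here is an assumption, not the documented schema.
raw = """
{
  "models": [{"name": "llama3.2:1b", "active": true}],
  "credentials": [{"name": "anthropic", "status": "ready"},
                  {"name": "openai", "status": "ready"}],
  "mcp_servers": [{"name": "neo4j", "status": "ready"}]
}
"""
# In a real script: raw = subprocess.run(["spn", "status", "--json"],
#                                        capture_output=True, text=True).stdout
status = json.loads(raw)
ready = [c["name"] for c in status["credentials"] if c["status"] == "ready"]
print(ready)  # ['anthropic', 'openai']
```

A CI job could fail fast when a required credential is missing from the `ready` list.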
Run autonomous AI agents that delegate tasks, reason, and learn.
spn jobs submit workflow.yaml # Submit background workflow
spn jobs list # Show running jobs
spn jobs logs <id> # Stream logs
spn suggest # Context-aware suggestions

Features:
- 🔄 Background job scheduler
- 🧠 Cross-session memory
- 🤖 Multi-agent delegation
- 🔮 Autonomy orchestration
- 💡 Proactive suggestions
Configuration that scales from solo dev to enterprise.
🌍 Global (~/.spn/config.toml)
  ↓
👥 Team (./mcp.yaml, committed to git)
  ↓
💻 Local (./.spn/local.yaml, gitignored)
  ↓
⚙️ Resolved (Local > Team > Global)
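As a concrete illustration of the layering, assuming the global file stores the same dotted keys that spn config get reads (a sketch, not the authoritative format):

```toml
# ~/.spn/config.toml (global layer); this structure is an illustrative assumption.
[providers.anthropic]
model = "claude-opus-4"   # a ./.spn/local.yaml entry for the same key wins over this
```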
spn config show # View resolved config
spn config get providers.anthropic.model
spn config set providers.anthropic.model claude-opus-4
spn config where # Show file locations

Configure once, sync to all your editors.
spn sync # Sync to all enabled
spn sync --target claude-code # Sync to one
spn sync --interactive # Preview changes
spn sync --enable cursor # Enable auto-sync

Supported Editors:
- Claude Code (.claude/settings.json)
- Cursor (.cursor/mcp.json)
- Windsurf (.windsurf/mcp.json)
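Per-editor files use the standard MCP server-definition shape. A synced .cursor/mcp.json might look like the sketch below; the server entry and its args are illustrative, not captured output of spn sync.

```json
{
  "mcpServers": {
    "neo4j": {
      "command": "uvx",
      "args": ["mcp-neo4j-cypher"],
      "env": { "NEO4J_URI": "bolt://localhost:7687" }
    }
  }
}
```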
Turn any REST API into an MCP server.
spn mcp wrap --from-openapi swagger.json --output server.yaml
spn mcp add ./server.yaml

Features:
- OpenAPI 3.0 parsing
- Rate limiting
- Authentication handling
- MCP Resources support
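The wrapper's server.yaml is an spn-specific file whose format is not documented in this README; purely as a hypothetical sketch of the information it might carry:

```yaml
# server.yaml: hypothetical sketch, every field name here is an assumption.
name: my-rest-api
openapi: swagger.json       # source OpenAPI 3.0 document
rate_limit: 60              # requests per minute
auth:
  type: bearer
  credential: my-rest-api   # token looked up in the OS keychain
```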
SPN EVOLUTION · v0.15 to v0.18 (2026 Q1-Q2)

Phase A (v0.16.0) · UNIFIED BACKEND REGISTRY
- @models/ aliases in spn.yaml
- Cloud providers as backends
- Intent-based model selection
- Backend orchestration system

🚀 Phase B (v0.17.0) · MULTIMODAL BACKENDS
- Candle (HuggingFace models)
- mistral.rs (vision models)
- Image generation/analysis
- Speech-to-text, text-to-speech

🔧 Phase C (v0.17.5) · HARDWARE-AWARE RECOMMENDATIONS
- llmfit-core integration
- System resource detection
- Model scoring based on hardware
- Automatic fallback strategies

🤖 Phase D (v0.18.0) · REASONING MODELS
- OpenAI o1/o3 support
- DeepSeek-R1 support
- Reasoning trace capture
- Anthropic extended thinking

🔮 Phase E (v0.18.5) · AGENTIC CAPABILITIES
- Nested agent spawning
- Schema introspection
- Dynamic task decomposition
- Lazy context loading

🔄 Phase F (v0.19.0) · MCP AUTO-SYNC
- File system monitoring
- Foreign MCP detection
- Desktop notifications
- Automatic adoption/sync
Current: v0.16.0 (Phase A in progress)
7-CRATE WORKSPACE

spn-core (0.2.0): Zero-dependency types, provider registry, validation
  ↓
spn-keyring (0.1.5): OS keychain wrapper, memory protection
  ↓
spn-client (0.3.4): Daemon SDK for external tools (Nika, IDE plugins)
  ↓
  ├── spn-cli (0.16.0): Main CLI binary, daemon process, job scheduler, agent orchestration
  └── spn-mcp (0.1.5): REST-to-MCP wrapper, OpenAPI parser, rate limiting, MCP Resources

spn-providers (0.1.0): Cloud/local backend traits + implementations
spn-native (0.1.0): HuggingFace downloads + native inference (mistral.rs)
Key Integrations:
- Nika (v0.21.1): Reads MCP configs directly from ~/.spn/mcp.yaml
- NovaNet (v0.17.2): Uses spn-client for credential access
- Claude Code/Cursor/Windsurf: Synced via spn sync
```mermaid
flowchart TB
    subgraph SPN["spn · The Agentic AI Toolkit"]
        CLI["CLI Commands"]
        DAEMON["Background Daemon"]
        MODELS["Model Manager"]
        SECRETS["Secrets Vault"]
    end
    subgraph RUNTIME["Runtime Engines"]
        NIKA["Nika<br/>Workflow Engine"]
        NOVANET["NovaNet<br/>Knowledge Graph"]
    end
    subgraph EXTERNAL["External Services"]
        OLLAMA["Ollama"]
        CLAUDE["Claude"]
        OPENAI["OpenAI"]
        NEO4J["Neo4j"]
    end
    CLI --> DAEMON
    DAEMON --> MODELS
    DAEMON --> SECRETS
    NIKA -->|MCP Protocol| NOVANET
    NIKA -->|Reads directly| SPN
    MODELS --> OLLAMA
    SECRETS --> CLAUDE
    SECRETS --> OPENAI
    NOVANET --> NEO4J
```
| Project | Description | Version |
|---|---|---|
| spn | The Agentic AI Toolkit | v0.16.0 |
| Nika | YAML workflow engine (5 semantic verbs) | v0.21.1 |
| NovaNet | Knowledge graph (Neo4j + MCP) | v0.17.2 |
Direct Integration: Nika reads ~/.spn/mcp.yaml directly. No sync needed.
We welcome contributions! Here's how to get started.
# Clone the repository
git clone https://github.com/supernovae-st/supernovae-cli
cd supernovae-cli
# Build all crates
cargo build --workspace
# Run tests (1563+ passing)
cargo test --workspace
# Run linter (zero warnings)
cargo clippy --workspace -- -D warnings
# Format code
cargo fmt --workspace
# Install locally
cargo install --path crates/spn

Commit messages follow the conventional format:

type(scope): description
feat(model): add hardware-aware model selection
fix(daemon): resolve race condition in job scheduler
docs(readme): update installation instructions
Types: feat, fix, docs, refactor, test, chore, perf, style
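The convention above is mechanical enough to lint locally. A minimal sketch follows; the allowed types come from the list above, but the regex itself is our illustration, not part of spn's tooling.

```python
import re

# Allowed commit types, taken from the list above.
TYPES = ("feat", "fix", "docs", "refactor", "test", "chore", "perf", "style")
# type(scope): description, with the (scope) part optional.
PATTERN = re.compile(rf"^(?:{'|'.join(TYPES)})(?:\([a-z0-9-]+\))?: .+")

def is_conventional(message: str) -> bool:
    """Check only the first line of a commit message."""
    return bool(PATTERN.match(message.splitlines()[0]))

print(is_conventional("feat(model): add hardware-aware model selection"))  # True
print(is_conventional("updated some stuff"))                               # False
```

Wired into a git commit-msg hook, a check like this rejects non-conforming messages before they land.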
# Run all tests
cargo test --workspace
# Run with output
cargo test --workspace -- --nocapture
# Run specific test
cargo test test_config_resolution
# Run integration tests
cargo test --test integration

PR checklist:
- All tests passing
- Zero clippy warnings
- Code formatted (cargo fmt)
- Commit messages follow convention
- Documentation updated (if applicable)
Building the future of AI workflows
- Thibaut Melen · Founder & Architect
- Nicolas Cella · Co-Founder & Engineer
- Claude AI · Co-Author
- Nika Workflow · Co-Author
⭐ Star us on GitHub: it helps others discover SuperNovae!

MIT Licensed • Made with ❤️ and 🦀 by the SuperNovae team
Zero Clippy Warnings • 1563+ Tests • Automated Releases • Open Source First

