# 🦞 OpenClaw & NemoClaw: Lightweight Installer

We combined the power of OpenClaw AI agents and NVIDIA NemoClaw's enterprise security into a single lightweight installer.

No need for expensive NVIDIA GPUs or complex terminal setups. Our build runs on Windows and macOS, transparently emulates CUDA for AMD and Intel, isolates your agent in a secure OpenShell sandbox, and comes pre-loaded with the powerful Nemotron 3 Super 120B model. Your personal, secure, and all-powerful AI agent is ready in just one click.

> [!IMPORTANT]
> This repository provides a pre-configured, cross-platform installer for deploying autonomous AI agents based on OpenClaw with the integrated security policies of NVIDIA NemoClaw.


## ⚖️ Configuration Comparison

| Feature | Base OpenClaw | OpenClaw & NemoClaw (This Build) |
| --- | --- | --- |
| Execution Environment | Native host system (risk of data compromise) | Isolated NVIDIA-OpenShell sandbox (microVMs/containers) |
| Supported OS | Windows, macOS, Linux | Windows, macOS, Linux |
| Hardware Acceleration | Depends on manual driver configuration | NVIDIA, AMD, Intel (built-in CUDA call translation) |
| Installation | Manual build, dependency management (Python/Node) | Single binary installer (exe/dmg) with bundled dependencies |
| Out-of-the-box Model | None (requires API/local LLM configuration) | Pre-installed nvidia-nemotron-3-super-120B, plus full freedom to swap models (BYOK) |

## 🔍 How It Works

NemoClaw installs the NVIDIA OpenShell runtime and Nemotron models, then uses a versioned blueprint to create a sandboxed environment where every network request, file access, and inference call is governed by declarative policy. The `nemoclaw` CLI orchestrates the full stack: OpenShell gateway, sandbox, inference provider, and network policy.

| Component | Role |
| --- | --- |
| Plugin | TypeScript CLI commands for launch, connect, status, and logs. |
| Blueprint | Versioned Python artifact that orchestrates sandbox creation, policy, and inference setup. |
| Sandbox | Isolated OpenShell container running OpenClaw with policy-enforced egress and filesystem. |
| Inference | NVIDIA cloud model calls, routed through the OpenShell gateway, transparent to the agent. |

The blueprint lifecycle follows four stages: resolve the artifact, verify its digest, plan the resources, and apply through the OpenShell CLI.
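The four stages can be sketched in Python (a minimal illustration of the idea; the function names and plan steps here are hypothetical, not the actual blueprint API):

```python
import hashlib

def verify_digest(artifact: bytes, expected_sha256: str) -> None:
    """Stage 2: refuse to proceed if the artifact's digest doesn't match."""
    actual = hashlib.sha256(artifact).hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"digest mismatch: {actual} != {expected_sha256}")

def apply_blueprint(artifact: bytes, expected_sha256: str) -> list:
    """Run the resolve -> verify -> plan -> apply lifecycle for one artifact."""
    # Stage 1: resolve -- the artifact bytes stand in for a fetched blueprint.
    verify_digest(artifact, expected_sha256)  # Stage 2: verify
    # Stage 3: plan -- an illustrative, fixed resource plan.
    plan = ["create-sandbox", "apply-policy", "setup-inference"]
    # Stage 4: apply -- in the real stack this goes through the OpenShell CLI.
    return [f"applied:{step}" for step in plan]

blueprint = b"sandbox: v1"
digest = hashlib.sha256(blueprint).hexdigest()
print(apply_blueprint(blueprint, digest))
```

The point of the digest check in stage 2 is that a tampered artifact fails before any resources are planned or applied.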

When something goes wrong, errors may originate from either NemoClaw or the OpenShell layer underneath. Run `nemoclaw <name> status` for NemoClaw-level health and `openshell sandbox list` to check the underlying sandbox state.


## 🛑 Protection Layers

The sandbox starts with a strict baseline policy that controls network egress and filesystem access:

| Layer | What it protects | When it applies |
| --- | --- | --- |
| Network | Blocks unauthorized outbound connections. | Hot-reloadable at runtime. |
| Filesystem | Prevents reads/writes outside `/sandbox` and `/tmp`. | Locked at sandbox creation. |
| Process | Blocks privilege escalation and dangerous syscalls. | Locked at sandbox creation. |
| Inference | Reroutes model API calls to controlled backends. | Hot-reloadable at runtime. |
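As an illustration of the filesystem layer, a path check restricted to `/sandbox` and `/tmp` might look like the following (a sketch only; the real enforcement lives inside OpenShell, and this helper is hypothetical):

```python
from pathlib import PurePosixPath

# Roots the policy permits, per the table above.
ALLOWED_ROOTS = (PurePosixPath("/sandbox"), PurePosixPath("/tmp"))

def path_allowed(path: str) -> bool:
    """Return True only if the path sits under an allowed root."""
    p = PurePosixPath(path)
    # Reject relative paths and any '..' traversal outright.
    if not p.is_absolute() or ".." in p.parts:
        return False
    return any(p == root or root in p.parents for root in ALLOWED_ROOTS)

print(path_allowed("/sandbox/work/agent.log"))  # True
print(path_allowed("/etc/passwd"))              # False
```

Because this layer is locked at sandbox creation, the allowed roots cannot be widened at runtime.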

When the agent tries to reach an unlisted host, OpenShell blocks the request and surfaces it in the TUI for operator approval.
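The allow-or-escalate decision for egress can be sketched the same way (the allowlist contents here are illustrative, not the shipped defaults):

```python
# Illustrative allowlist; the real one is defined by the blueprint policy.
ALLOWED_HOSTS = {"build.nvidia.com"}

def egress_decision(host: str) -> str:
    """Allow listed hosts; anything else is blocked pending operator approval."""
    return "allow" if host in ALLOWED_HOSTS else "pending-approval"

print(egress_decision("build.nvidia.com"))  # allow
print(egress_decision("example.com"))       # pending-approval
```

A `pending-approval` result corresponds to the request surfacing in the TUI for the operator to approve or deny.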


## 🔗 Inference

Inference requests from the agent never leave the sandbox directly. OpenShell intercepts every call and routes it to the NVIDIA cloud provider.

| Provider | Model | Use Case |
| --- | --- | --- |
| NVIDIA cloud | `nvidia/nemotron-3-super-120b-a12b` | Production. Requires an NVIDIA API key. |

(Note: this build also includes pre-installed local weights, but an API key is required to access cloud compute resources.) Get an API key from build.nvidia.com; the `nemoclaw onboard` command prompts for this key during setup.
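Assuming the gateway exposes an OpenAI-style chat-completion interface (an assumption, not something this repository confirms), a request to the bundled model could be assembled like this. The snippet only builds the headers and payload; it sends nothing:

```python
import json
import os

def build_inference_request(prompt: str):
    """Assemble headers and body for a chat-completion call through the gateway."""
    headers = {
        # Key obtained from build.nvidia.com during `nemoclaw onboard`.
        "Authorization": f"Bearer {os.environ.get('NVIDIA_API_KEY', '<key>')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "nvidia/nemotron-3-super-120b-a12b",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return headers, body

headers, body = build_inference_request("Summarize the sandbox policy.")
print(json.dumps(body, indent=2))
```

In the described architecture the agent never constructs this request itself; OpenShell intercepts the call and performs the routing transparently.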


## 📦 Installation

The installers include all necessary runtime components, the hypervisor for OpenShell, and the base model weights.

  1. Download the latest version for your operating system.
  2. Run the installer and follow the standard steps.

### Key Commands

#### Host commands (`nemoclaw`)

Run these on the host to set up, connect to, and manage sandboxes.

| Command | Description |
| --- | --- |
| `nemoclaw onboard` | Interactive setup wizard: gateway, providers, sandbox. |
| `nemoclaw deploy <instance>` | Deploy to a remote GPU instance through Brev. |
| `nemoclaw <name> connect` | Open an interactive shell inside the sandbox. |
| `openshell term` | Launch the OpenShell TUI for monitoring and approvals. |
| `nemoclaw start` / `stop` / `status` | Manage auxiliary services (Telegram bridge, tunnel). |

#### Plugin commands (`openclaw nemoclaw`)

Run these inside the OpenClaw CLI.

| Command | Description |
| --- | --- |
| `openclaw nemoclaw launch [--profile ...]` | Bootstrap OpenClaw inside an OpenShell sandbox. |
| `openclaw nemoclaw status` | Show sandbox health, blueprint state, and inference status. |
| `openclaw nemoclaw logs [-f]` | Stream blueprint execution and sandbox logs. |

Known limitations:

  • The `openclaw nemoclaw` plugin commands are under active development; use the `nemoclaw` host CLI as the primary interface.
  • Setup may require manual workarounds on some platforms. File an issue if you encounter blockers.

## ❓ FAQ

**1. How does AMD and Intel GPU support work if NemoClaw is written for CUDA?**
We integrated a translation layer (based on ZLUDA forks and SYCL/Vulkan Compute cross-compilation) that intercepts CUDA calls from the inference engine on the fly and translates them into instructions understood by the ROCm (AMD) and oneAPI (Intel) stacks. This process is completely transparent to the user.

**2. The official NemoClaw repository states it only supports Linux. How does OpenShell work on Windows/macOS?**
To ensure strict isolation, OpenShell requires a Linux kernel. Our installer automatically deploys a minimal Linux backend (via WSL2 on Windows and a lightweight Linux VM on macOS). The CLI client forwards commands to this virtualized environment, preserving a native user experience.

**3. The Nemotron 3 Super 120B model is huge. What are the system requirements?**
This build uses a 4-bit quantized version of the model (GGUF/AWQ format). For comfortable operation, 24 GB to 80 GB of unified memory (VRAM + RAM) is required. If the system lacks sufficient VRAM, inference is automatically offloaded across the GPU and CPU, which may reduce generation speed (tokens per second).
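The 4-bit figure is easy to sanity-check with quick arithmetic: 120B parameters at 4 bits each come to roughly 60 GB for the weights alone, before KV cache and runtime overhead:

```python
params = 120e9        # 120B parameters
bits_per_param = 4    # 4-bit quantization (GGUF/AWQ)

# Bits -> bytes -> gigabytes (decimal GB).
weights_gb = params * bits_per_param / 8 / 1e9
print(f"~{weights_gb:.0f} GB for weights alone")  # ~60 GB
```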

**4. Can I replace the pre-installed model with a lighter one?**
Yes. You can load any other blueprint supported by the OpenClaw format. To switch to a different model (e.g., 8B or 70B), run: `nemoclaw update --blueprint <path_to_new_blueprint>`.

**5. How secure is the OpenShell environment in this build?**
It provides enterprise-grade security through isolated namespaces and strict egress routing rules. The agent has no access to your actual filesystem (outside of designated directories) and cannot send network requests to unlisted hosts without your explicit approval via `openshell term`.


## 📄 License

This project (installer scripts, emulation layers, and configurations) is distributed under the Apache License 2.0.

  • OpenClaw is an open-source project and is distributed under its own license.
  • NemoClaw, OpenShell, and Nemotron models are the property of NVIDIA Corporation and are used in accordance with their public developer license agreements.

See the LICENSE file for details.

