
fix(onboard): update sandbox openclaw.json with selected Ollama model (fixes #628)#630

Open
kagura-agent wants to merge 1 commit into NVIDIA:main from kagura-agent:fix/628-local-ollama-configuration-not

Conversation

Contributor

@kagura-agent kagura-agent commented Mar 22, 2026

Problem

When users select Local Ollama during onboarding, the sandbox's ~/.openclaw/openclaw.json still shows the default cloud model (inference/nvidia/nemotron-3-super-120b-a12b) instead of the selected Ollama model (e.g., nemotron-3-nano:30b).

This happens because the Dockerfile bakes openclaw.json at build time using ARG NEMOCLAW_MODEL (default: nvidia/nemotron-3-super-120b-a12b), and the sandbox is built before the user selects their inference provider/model.

The gateway correctly routes requests to Ollama (via openshell inference set), but the agent reads openclaw.json and reports the wrong model identity.

Root Cause

The onboarding flow was ordered:

  1. Step 3: Build sandbox (bakes openclaw.json with default cloud model)
  2. Step 4: User selects inference provider/model

So openclaw.json always contained the cloud default, regardless of what the user chose.

Fix

Reorder the onboarding flow so inference selection happens before sandbox creation:

  1. Step 3: User selects inference provider/model (renamed setupNim() → selectInference())
  2. Step 4: Build sandbox — patches the copied Dockerfile's ARG NEMOCLAW_MODEL default to the selected model before building

This preserves the immutable config design from PR #588: openclaw.json remains root:root 444 and is never modified at runtime. The fix operates entirely at build time by patching the Dockerfile's build arg.

Changes

  • bin/lib/onboard.js: Reorder flow so selectInference() runs before createSandbox()
  • bin/lib/onboard.js: Add patchDockerfileModel() helper
  • bin/lib/onboard.js: Rename setupNim() → selectInference() (no sandbox dependency)
  • bin/lib/onboard.js: Defer NIM container start until after sandbox creation
  • test/onboard.test.js: Add tests for patchDockerfileModel()
  • test/onboard-selection.test.js: Update to use the renamed selectInference()

Testing

  • All existing tests pass (195/196 — 1 pre-existing failure in install-preflight.test.js)
  • New tests verify Dockerfile patching with correct and absent ARG lines

Fixes #628

Summary by CodeRabbit

  • New Features

    • Model selection is now integrated earlier in the setup process; selected models are pre-configured into the Docker image at build time.
  • Refactor

    • Improved inference provider setup workflow and container startup process with enhanced health checking.

…fixes NVIDIA#628)

When users select Local Ollama during onboarding, the sandbox's
openclaw.json still showed the default cloud model because the
Dockerfile was built before the model was selected.

Fix: reorder the onboarding flow so inference provider/model
selection (Step 3) happens before sandbox creation (Step 4).
The selected model is then patched into the Dockerfile's
ARG NEMOCLAW_MODEL default before the image build, ensuring
openclaw.json is baked with the correct model at build time.

This preserves the immutable config design from PR NVIDIA#588 —
openclaw.json is still root:root 444 and never modified at
runtime. The fix operates entirely at build time.

Changes:
- Rename setupNim() → selectInference() (no sandbox dependency)
- Add patchDockerfileModel() to rewrite ARG NEMOCLAW_MODEL
- Defer NIM container start to after sandbox creation
- Move registry.updateSandbox() to main onboard() flow
- Add tests for Dockerfile patching

coderabbitai bot commented Mar 22, 2026

📝 Walkthrough

Walkthrough

This PR refactors the onboarding flow to ensure user-selected inference models are baked into the sandbox image at build time. It introduces patchDockerfileModel() to rewrite the Dockerfile's model argument, reorders inference selection before sandbox creation, and separates NIM container startup into a dedicated function with updated module exports.

Changes

Cohort / File(s) Summary
Onboarding Logic Refactoring
bin/lib/onboard.js
Added patchDockerfileModel() to rewrite ARG NEMOCLAW_MODEL in Dockerfile, refactored onboard() to call selectInference() before createSandbox() and pass the selected model, introduced startNim() function for post-sandbox NIM container startup, updated module exports (removed setupNim; added patchDockerfileModel and selectInference).
Test Updates
test/onboard-selection.test.js
Changed import from setupNim to selectInference and updated test invocation to match new function signature.
Dockerfile Patching Tests
test/onboard.test.js
Added test cases verifying patchDockerfileModel() correctly updates ARG NEMOCLAW_MODEL in Dockerfile while preserving unrelated arguments, and handles missing ARG lines gracefully.

Sequence Diagram

sequenceDiagram
    participant User
    participant Onboard as Onboard Flow
    participant Inference as selectInference()
    participant Sandbox as createSandbox()
    participant Patcher as patchDockerfileModel()
    participant NIM as startNim()

    User->>Onboard: onboard(gpu)
    Onboard->>Inference: selectInference(gpu)
    Inference-->>Onboard: {provider, model}
    Onboard->>Sandbox: createSandbox(gpu, model)
    Sandbox->>Patcher: patchDockerfileModel(path, model)
    Patcher-->>Sandbox: Dockerfile updated
    Sandbox-->>Onboard: sandboxName
    Onboard->>NIM: startNim(sandboxName, model)
    NIM-->>Onboard: Container ready
    Onboard-->>User: Sandbox configured

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Hoppy changes, I declare!
Models selected with utmost care,
Dockerfiles patched before the build,
Sandbox images cleverly filled.
NIM starts when the time is right,
Local Ollama, shining bright! 🌟

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

  • Linked Issues check (⚠️ Warning): The code changes deliver the Dockerfile model patching and inference-selection reordering that make the sandbox openclaw.json reflect the selected model, but they do not address non-interactive provider fallback logic or network warnings for Ollama. Resolution: address the non-interactive NEMOCLAW_PROVIDER=ollama fallback logic and add Ollama network warnings (127.0.0.1 vs host binding) to fully satisfy the issue #628 acceptance criteria.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 77.78%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (3 passed)

  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): The title 'update sandbox openclaw.json with selected Ollama model' accurately describes the main fix: reordering onboarding so the selected model is used when building the sandbox.
  • Out of Scope Changes check (✅ Passed): All changes are in scope: Dockerfile patching, inference-selection reordering, NIM container startup deferral, and test updates all align with fixing the sandbox openclaw.json model configuration.



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
bin/lib/onboard.js (1)

298-314: Potential injection risk in patchDockerfileModel if model contains regex special characters.

The model parameter is interpolated directly into the replacement string. While isSafeModelId() validates the model against /^[A-Za-z0-9._:/-]+$/, none of these characters are problematic for the regex replacement. However, the function relies on validation happening upstream.

Consider adding a defensive check or assertion:

🛡️ Optional: Add defensive validation
 function patchDockerfileModel(dockerfilePath, model) {
+  if (!isSafeModelId(model)) {
+    throw new Error(`Invalid model identifier: ${model}`);
+  }
   const content = fs.readFileSync(dockerfilePath, "utf8");
   const patched = content.replace(
     /^ARG NEMOCLAW_MODEL=.+$/m,
     `ARG NEMOCLAW_MODEL=${model}`,
   );
   fs.writeFileSync(dockerfilePath, patched);
 }

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 431c6c0c-d278-4c60-9b3a-8804c8cd782c

📥 Commits

Reviewing files that changed from the base of the PR and between 04012f7 and 17886a7.

📒 Files selected for processing (3)
  • bin/lib/onboard.js
  • test/onboard-selection.test.js
  • test/onboard.test.js

@andy-ratsirarson

The gateway is actually swapping in the correct LLM during the in-flight request; the default in openclaw.json is being ignored.

@kagura-agent
Contributor Author

Thanks for the context @andy-ratsirarson! That's an important distinction.

So the actual behavior is:

  • Gateway correctly uses the selected Ollama model at runtime ✅
  • openclaw.json inside the sandbox still shows the stale default ❌ (cosmetic)

This PR fixes the cosmetic issue — after onboarding, openclaw.json will reflect the actual model being used. This matters for:

  1. Users inspecting their config (seeing nemotron-3-super-120b-a12b when they selected a local model is confusing)
  2. Any code that reads openclaw.json directly instead of going through the gateway

Does the gateway always override openclaw.json, or are there code paths that read the config file directly? If the former, this is purely cosmetic; if the latter, it could cause real issues.

@wscurran wscurran added bug Something isn't working OpenClaw Plugin priority: high Important issue that should be resolved in the next release labels Mar 23, 2026
@wscurran
Contributor

Thanks for submitting this proposed fix to update the openclaw.json file with the selected Ollama model, which may help ensure that the correct model is used in the sandbox.

@wscurran wscurran added Getting Started Use this label to identify setup, installation, or onboarding issues. Integration: OpenClaw Support for OpenClaw labels Mar 23, 2026


Development

Successfully merging this pull request may close these issues.

Local Ollama configuration not reflected in sandbox openclaw.json (shows cloud nemotron default)
