Commit efb80e7

docs(ollama): add ollama to community sandboxes catalog and supported agents (#383)

* docs(ollama): add ollama to community sandboxes catalog and supported agents
* update readme

1 parent 925160e

5 files changed: 8 additions, 5 deletions

File tree:

README.md
docs/about/supported-agents.md
docs/inference/configure.md
docs/sandboxes/community-sandboxes.md
docs/tutorials/index.md

README.md (2 additions, 1 deletion)
````diff
@@ -36,7 +36,7 @@ uv tool install -U openshell
 ### Create a sandbox
 
 ```bash
-openshell sandbox create -- claude # or opencode, codex
+openshell sandbox create -- claude # or opencode, codex, ollama
 ```
 
 A gateway is created automatically on first use. To deploy on a remote host instead, pass `--remote user@host` to the create command.
@@ -137,6 +137,7 @@ The CLI auto-bootstraps a GPU-enabled gateway on first use. GPU intent is also i
 | [OpenCode](https://opencode.ai/) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Works out of the box. Provider uses `OPENAI_API_KEY` or `OPENROUTER_API_KEY`. |
 | [Codex](https://developers.openai.com/codex) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Works out of the box. Provider uses `OPENAI_API_KEY`. |
 | [OpenClaw](https://openclaw.ai/) | [Community](https://github.com/NVIDIA/OpenShell-Community) | Launch with `openshell sandbox create --from openclaw`. |
+| [Ollama](https://ollama.com/) | [Community](https://github.com/NVIDIA/OpenShell-Community) | Launch with `openshell sandbox create --from ollama`. |
 
 ## Key Commands
 
````
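
The updated quickstart and the new table row point at the same community image. A hedged sketch of the two launch paths this commit documents; combining `--remote` with `--from` in a single command is an assumption, since the README shows the two flags in separate examples:

```bash
# Launch the community Ollama sandbox on the auto-bootstrapped local gateway
openshell sandbox create --from ollama

# Assumed combination: deploy the same sandbox on a remote host instead
openshell sandbox create --remote user@host --from ollama
```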

docs/about/supported-agents.md (1 addition, 0 deletions)
````diff
@@ -8,6 +8,7 @@ The following table summarizes the agents that run in OpenShell sandboxes. All a
 | [OpenCode](https://opencode.ai/) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | Partial coverage | Pre-installed. Add `opencode.ai` endpoint and OpenCode binary paths to the policy for full functionality. |
 | [Codex](https://developers.openai.com/codex) | [`base`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/base) | No coverage | Pre-installed. Requires a custom policy with OpenAI endpoints and Codex binary paths. Requires `OPENAI_API_KEY`. |
 | [OpenClaw](https://openclaw.ai/) | [`openclaw`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/openclaw) | Bundled | Agent orchestration layer. Launch with `openshell sandbox create --from openclaw`. |
+| [Ollama](https://ollama.com/) | [`ollama`](https://github.com/NVIDIA/OpenShell-Community/tree/main/sandboxes/ollama) | Bundled | Run cloud and local models. Includes Claude Code, Codex, and OpenClaw. Launch with `openshell sandbox create --from ollama`. |
 
 More community agent sandboxes are available in the {doc}`../sandboxes/community-sandboxes` catalog.
 
````
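
To illustrate the "cloud and local models" note in the new row, a minimal sketch of a session inside the `ollama` sandbox. The local model tag is a real Ollama library entry; the `-cloud` tag is illustrative only and assumes Ollama's cloud model naming, which may also require an Ollama account sign-in:

```bash
# Pull and run a local model inside the sandbox (uses local GPU/CPU resources)
ollama pull llama3.2
ollama run llama3.2 "Explain what a network policy is"

# Cloud-hosted models offload inference to Ollama's servers; tag is illustrative
ollama run gpt-oss:120b-cloud "Explain what a network policy is"
```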

docs/inference/configure.md (1 addition, 1 deletion)
````diff
@@ -137,7 +137,7 @@ Use this endpoint when inference should stay local to the host for privacy and s
 
 When the upstream runs on the same machine as the gateway, bind it to `0.0.0.0` and point the provider at `host.openshell.internal` or the host's LAN IP. `127.0.0.1` and `localhost` usually fail because the request originates from the gateway or sandbox runtime, not from your shell.
 
-If the gateway runs on a remote host or behind a cloud deployment, `host.openshell.internal` points to that remote machine, not to your laptop. A laptop-local Ollama or vLLM process is not reachable from a remote gateway unless you add your own tunnel or shared network path.
+If the gateway runs on a remote host or behind a cloud deployment, `host.openshell.internal` points to that remote machine, not to your laptop. A locally running Ollama or vLLM process is not reachable from a remote gateway unless you add your own tunnel or shared network path. Ollama also supports cloud-hosted models that do not require local hardware.
 
 ### Verify the Endpoint from a Sandbox
 
````
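
The changed paragraph compresses three networking facts: bind address, host alias, and the remote-gateway caveat. A hedged end-to-end sketch, assuming Ollama's default port `11434`, its standard `OLLAMA_HOST` binding variable, and its OpenAI-compatible `/v1` routes; `host.openshell.internal` is the alias named in the doc itself:

```bash
# On the gateway host: bind the upstream to all interfaces, not 127.0.0.1,
# so requests originating from the gateway/sandbox runtime can reach it
OLLAMA_HOST=0.0.0.0 ollama serve

# From inside a sandbox: verify the endpoint via the host alias
curl http://host.openshell.internal:11434/v1/models

# If the gateway is remote and the model runs on your laptop, one possible
# "tunnel or shared network path" is a reverse SSH tunnel (assumes SSH access)
ssh -N -R 11434:127.0.0.1:11434 user@remote-gateway-host
```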

docs/sandboxes/community-sandboxes.md (1 addition, 0 deletions)
````diff
@@ -43,6 +43,7 @@ The following community sandboxes are available in the catalog.
 | Sandbox | Description |
 |---|---|
 | `base` | Foundational image with system tools and dev environment |
+| `ollama` | Ollama with cloud and local model support, Claude Code, Codex, and OpenClaw pre-installed |
 | `openclaw` | Open agent manipulation and control |
 | `sdg` | Synthetic data generation workflows |
 
````

docs/tutorials/index.md (3 additions, 3 deletions)
````diff
@@ -44,11 +44,11 @@ Launch Claude Code in a sandbox, diagnose a policy denial, and iterate on a cust
 {bdg-secondary}`Tutorial`
 :::
 
-:::{grid-item-card} Local Inference with Ollama
+:::{grid-item-card} Inference with Ollama
 :link: local-inference-ollama
 :link-type: doc
 
-Route inference to a local Ollama server, verify it from a sandbox, and reuse the same pattern for other OpenAI-compatible engines.
+Route inference through Ollama using cloud-hosted or local models, and verify it from a sandbox.
 +++
 {bdg-secondary}`Tutorial`
 :::
@@ -68,6 +68,6 @@ Route inference to a local LM Studio server via the OpenAI or Anthropic compatib
 
 First Network Policy <first-network-policy>
 GitHub Push Access <github-sandbox>
-Local Inference with Ollama <local-inference-ollama>
+Inference with Ollama <local-inference-ollama>
 Local Inference with LM Studio <local-inference-lmstudio>
 ```
````
