AI Tools · 18 min read · March 19, 2026

NemoClaw by NVIDIA: A Practical Guide for OpenClaw Developers

A hands-on walkthrough of NemoClaw — NVIDIA's sandboxed AI agent runtime. Fresh install, local Ollama inference, 5 game-changing features, and every error explained simply.

Tags: NemoClaw · NVIDIA · OpenClaw · Ollama · AI Agents · Security · Ubuntu
Neel Shah
Tech Lead · Senior Data Engineer · Ottawa

You already run OpenClaw. You have Ollama on your machine. Your AI agent replies on WhatsApp, it reads files, it runs commands. It works.

But here is a question worth sitting with: what stops your agent from doing something it should not?

Not you telling it not to. Not a system prompt. An actual hard wall at the operating system level — one that blocks the network call before it happens, refuses the file read before the kernel even sees it, and routes every inference call through a controlled path.

That is what NemoClaw is. Released by NVIDIA as open source, it wraps your OpenClaw agent inside a secure sandbox called OpenShell. The agent still does everything it normally does. It just cannot escape the box.

This guide walks through the full setup from zero, explains what each piece actually does, and covers the five features that will change how you think about running agents.


Before We Start: What You Need

Let us be honest about the requirements before you spend time installing anything.

Minimum to run NemoClaw
─────────────────────────────────────────
  CPU      4 vCPU
  RAM      8 GB   (16 GB recommended)
  Disk     20 GB free  (40 GB recommended)
  OS       Ubuntu 22.04 LTS or later
  Node.js  v20 or later
  Docker   installed and running

The sandbox image alone is about 2.4 GB compressed. Docker, the OpenShell gateway, and the agent all load on top. On a machine with less than 8 GB RAM you will hit out-of-memory errors. If that is your situation, the workaround is adding 8 GB of swap — covered in the Troubleshooting section at the end.

If you already use OpenClaw with Ollama (which you likely do if you are reading this), you have most of this already. The only new requirement is Docker.

Check Docker:

docker --version
docker ps

If the second command fails with a permissions error, add yourself to the docker group:

sudo usermod -aG docker $USER
newgrp docker

How NemoClaw Fits Into What You Already Have

Before installing anything, it helps to understand where NemoClaw sits in relation to OpenClaw and Ollama.

Without NemoClaw (plain OpenClaw)
─────────────────────────────────────────────────────
  WhatsApp / TUI


  OpenClaw Gateway  ◄──────────────────────────────────
       │                                              │
       ▼                                        Ollama (models)
  Your agent runs directly on your machine
  - Can read any file your user can read
  - Can make any network request
  - Can run any command you approve

With NemoClaw
─────────────────────────────────────────────────────
  WhatsApp / TUI


  OpenClaw Gateway


  ┌─────────────────────────────────────┐
  │  OpenShell Sandbox                  │
  │                                     │
  │   Your agent runs here              │
  │   - Files: /sandbox and /tmp only   │
  │   - Network: allowlist only         │
  │   - Processes: no privilege escalation │
  │   - Inference: routed via OpenShell │
  │                                     │
  └─────────────────────────────────────┘


  Ollama (on your host machine, reachable via policy)

The agent behaves identically from the outside. The difference is entirely about what it cannot do, even if it tries.


The Four Building Blocks

NemoClaw is made of four parts. You do not need to fully understand them to use it, but knowing their names helps a lot when something goes wrong.

┌──────────────────────────────────────────────────────────┐
│                      NemoClaw Stack                      │
│                                                          │
│  ┌─────────────┐   ┌──────────────┐   ┌──────────────┐  │
│  │   Plugin    │   │  Blueprint   │   │   Sandbox    │  │
│  │             │   │              │   │              │  │
│  │ TypeScript  │──►│ Python       │──►│ OpenShell    │  │
│  │ CLI on host │   │ artifact     │   │ container    │  │
│  │             │   │ (versioned)  │   │              │  │
│  └─────────────┘   └──────────────┘   └──────────────┘  │
│                                              │           │
│                                    ┌─────────▼────────┐  │
│                                    │   Inference      │  │
│                                    │   Router         │  │
│                                    │   (Ollama/NVIDIA)│  │
│                                    └──────────────────┘  │
└──────────────────────────────────────────────────────────┘
Part              What it is                      What it does
──────────────────────────────────────────────────────────────────────────────
Plugin            TypeScript CLI on your host     The commands you type: nemoclaw start, nemoclaw connect
Blueprint         Versioned Python artifact       A recipe that creates the sandbox — pinned and verified
Sandbox           Docker container + OpenShell    Where your agent actually lives and runs
Inference Router  Network layer inside OpenShell  Intercepts all model API calls and routes them to Ollama or NVIDIA Cloud

Installation

Step 1 — Run the installer

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

This does three things automatically:

  1. Checks for Node.js v20+ and installs it if missing
  2. Installs the nemoclaw CLI
  3. Launches the interactive onboarding wizard

The wizard will ask you several questions. Here is what to expect and what to answer for a local Ollama setup:

? Gateway port                    → press Enter (default 18789)
? Inference provider              → select "Ollama (local)"
? Ollama base URL                 → http://127.0.0.1:11434
? Model to use                    → qwen3:8b  (or any model you have)
? Sandbox name                    → my-assistant  (or anything you like)
? Enable network policy           → Yes
? Enable filesystem policy        → Yes

Step 2 — Fix PATH if needed

If you use nvm or fnm to manage Node.js versions, the installer may not find them automatically. Run:

source ~/.bashrc

Then check the install worked:

nemoclaw --version

Step 3 — Verify the sandbox started

When onboarding completes, you will see a summary like this:

Sandbox my-assistant (Landlock + seccomp + netns)
Model ollama/qwen3:8b

Run:    nemoclaw my-assistant connect
Status: nemoclaw my-assistant status
Logs:   nemoclaw my-assistant logs --follow

Check it is healthy:

nemoclaw my-assistant status

A healthy output looks like:

● my-assistant
  Sandbox    running
  Blueprint  v0.4.1 (verified)
  Inference  ollama/qwen3:8b  ← connected
  Network    policy active (4 rules)
  Uptime     2m 34s

If you see inference: disconnected, Ollama is not reachable from inside the sandbox. Jump to the Troubleshooting section.


Connecting to Your Agent

Once the sandbox is running, you connect to it and talk to the agent the same way you always have:

# Open an interactive shell inside the sandbox
nemoclaw my-assistant connect

# You are now inside the sandbox — notice the prompt change
sandbox@my-assistant:~$

# Start the TUI to chat
openclaw tui

# Or send a single message via CLI (better for long responses)
openclaw agent --agent main --local -m "summarise my last 5 tasks" --session-id s1

To exit the sandbox shell, just type exit.


The 5 Game-Changing Features

Feature 1: The Four-Layer Security Wall

This is the core of NemoClaw and the reason it exists. Every agent running inside the sandbox is governed by four independent security boundaries that stack on top of each other.

Request from the agent


┌───────────────────┐
│   1. NETWORK      │  Is this host on the allowlist?
│   (netns)         │  No → BLOCKED immediately
└────────┬──────────┘
         │ Yes

┌───────────────────┐
│   2. FILESYSTEM   │  Is this path inside /sandbox or /tmp?
│   (Landlock)      │  No → BLOCKED immediately
└────────┬──────────┘
         │ Yes

┌───────────────────┐
│   3. PROCESS      │  Is this a safe syscall?
│   (seccomp)       │  No → BLOCKED immediately
└────────┬──────────┘
         │ Yes

┌───────────────────┐
│   4. INFERENCE    │  Is the model request going through
│   (OpenShell GW)  │  the controlled router?
└────────┬──────────┘
         │ Yes

     Request allowed

Why this matters: Each layer is independent. If the network layer is misconfigured, filesystem and process controls still work. If a model somehow generates a malicious command, the process layer blocks it. You are not relying on one lock — you have four.

Layers 1 and 4 are hot-reloadable. You can update your network allowlist or swap the inference model without restarting the sandbox. Layers 2 and 3 (filesystem and process) are locked at creation time and cannot change without a rebuild.
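To make the "independent layers" idea concrete, here is a toy Python sketch of the gate logic described above. This is not NemoClaw's actual implementation — the real enforcement happens in the kernel (netns, Landlock, seccomp) and in OpenShell — it only models the decision flow: every layer must approve, and one misconfigured layer never disables the others.

```python
# Hypothetical sketch of the four-layer gate — illustration only.
# Each layer is an independent predicate over a request dict.

def network_ok(req):
    # Layer 1: destination must be on the egress allowlist (default-deny)
    return (req.get("host"), req.get("port")) in {("127.0.0.1", 11434)}

def filesystem_ok(req):
    # Layer 2: every touched path must live under /sandbox or /tmp
    return all(p.startswith(("/sandbox/", "/tmp/")) for p in req.get("paths", []))

def process_ok(req):
    # Layer 3: seccomp, simplified here as a denylist of dangerous syscalls
    return req.get("syscall") not in {"ptrace", "mount", "setuid"}

def inference_ok(req):
    # Layer 4: model calls must go through the controlled router
    return req.get("via_router", True)

LAYERS = [network_ok, filesystem_ok, process_ok, inference_ok]

def allow(req):
    # The first failing layer blocks the request. Crucially, each
    # predicate is evaluated on its own: a bug in one layer's config
    # does not weaken the others.
    return all(layer(req) for layer in LAYERS)
```

With this shape, a request that passes the network check but reaches for `/etc/passwd` is still blocked by the filesystem layer — the "four locks, not one" property the diagram shows.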


Feature 2: Network Policy With Live Approval

Plain OpenClaw agents can make any outbound network request your machine permits. If you ask your agent to “check the weather”, it will call a weather API. Fine. But what if someone crafts a message that tricks it into calling something else?

NemoClaw blocks all outbound network by default. The agent can only reach hosts you have explicitly listed in the policy file.

Here is what a policy file looks like:

# ~/.nemoclaw/my-assistant/network-policy.yaml

egress:
  - host: "127.0.0.1"       # Ollama (local)
    port: 11434
  - host: "api.github.com"  # GitHub API
    port: 443
  - host: "pypi.org"        # Python packages
    port: 443

When the agent tries to reach a host that is not in the list, OpenShell blocks it instantly. The blocked attempt surfaces in the TUI:

⚠ Blocked outbound request
  From:  openclaw-agent
  To:    suspicious-host.com:443
  Time:  11:42:03

  [Allow once]  [Add to policy]  [Deny always]

You see it, you decide. The policy is hot-reloadable — edit the file, save it, and the new rules apply within seconds without restarting the sandbox:

# Edit the policy
nano ~/.nemoclaw/my-assistant/network-policy.yaml

# Reload without restart
nemoclaw my-assistant reload-policy
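Conceptually, the check behind that policy file is tiny. The sketch below assumes the YAML parses to a list of host/port entries and models the default-deny decision in Python — real NemoClaw semantics (wildcards, CIDR ranges, and so on) may differ.

```python
# Illustrative default-deny egress check, mirroring the policy file above.
policy = [
    {"host": "127.0.0.1",      "port": 11434},  # Ollama (local)
    {"host": "api.github.com", "port": 443},    # GitHub API
    {"host": "pypi.org",       "port": 443},    # Python packages
]

def egress_allowed(host: str, port: int) -> bool:
    # Anything not explicitly listed is blocked — there is no
    # "allow by default" path.
    return any(r["host"] == host and r["port"] == port for r in policy)
```

Note that the port matters too: `api.github.com:443` being listed does not open `api.github.com:80`.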

Why this is a game changer: Agents running autonomously for long periods — like an always-on WhatsApp assistant — are exactly the kind of workload where unexpected network calls can go unnoticed for days. NemoClaw makes every outbound connection visible and optional.


Feature 3: Filesystem Isolation

Inside the sandbox, the agent can only see two directories:

Sandbox filesystem view
─────────────────────────────────────
  /sandbox/     ← agent's workspace
  /tmp/         ← temporary files

Everything else → Permission denied
─────────────────────────────────────
  /home/neel/   ← BLOCKED
  /etc/         ← BLOCKED
  /var/         ← BLOCKED
  /root/        ← BLOCKED

This is enforced by Landlock, a Linux kernel security module (available since kernel 5.13). It is not Docker volume mounts — it is kernel-level enforcement that cannot be bypassed from inside the container.

To give the agent access to a specific file or directory, you mount it explicitly at sandbox creation:

nemoclaw onboard --mount /home/neel/projects/my-project:/sandbox/project

Inside the sandbox, the agent sees /sandbox/project. It has no idea /home/neel/projects exists.

Why this matters: If your agent processes documents, handles uploads, or reads user-provided files — filesystem isolation means a malicious filename or path traversal attack cannot touch anything outside the sandbox.
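The path-traversal point is worth seeing in code. The sketch below models the confinement decision in Python; with Landlock the kernel makes this call at syscall time, so the Python version is purely illustrative and provides no real security on its own.

```python
import os

# Illustrative path-confinement check in the spirit of a Landlock
# ruleset. The key step is resolving symlinks and ".." segments
# BEFORE comparing against the allowed roots.

ALLOWED_ROOTS = ("/sandbox", "/tmp")

def path_allowed(path: str) -> bool:
    # realpath() normalizes traversal tricks like
    # /sandbox/../etc/passwd down to /etc/passwd first.
    real = os.path.realpath(path)
    return any(real == root or real.startswith(root + "/")
               for root in ALLOWED_ROOTS)
```

A naive `path.startswith("/sandbox")` check would wave `/sandbox/../etc/passwd` through; resolving first is what closes that hole.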


Feature 4: Local Inference With Ollama

NemoClaw’s default inference provider is NVIDIA Cloud, which requires an API key and sends your data externally. But for a self-hosted setup where privacy matters, you can route all inference through your local Ollama instance instead.

Here is how the routing works:

Without NemoClaw (plain OpenClaw)
──────────────────────────────────
  Agent  →  Ollama  →  qwen3:8b
  (direct call, no interception)


With NemoClaw + local Ollama
──────────────────────────────────
  Agent  →  OpenShell Inference Router  →  Ollama  →  qwen3:8b
             (intercepted, logged,           (on host,
              policy-checked)                port 11434)

The agent does not know the difference. It makes the same API call it always did. OpenShell intercepts it, checks it against the inference policy, logs it, and forwards it to Ollama.

To configure local Ollama inference during onboarding:

? Inference provider  →  Ollama (local)
? Ollama base URL     →  http://host.docker.internal:11434

Note: From inside a Docker container, localhost refers to the container itself, not your host machine. On Linux, host.docker.internal is not available by default; Docker 20.10 and later support it when the container is started with --add-host=host.docker.internal:host-gateway. Failing that, use the Docker bridge IP (typically 172.17.0.1), as shown in the Troubleshooting section.

To switch models without touching the sandbox:

# Pull the new model on your host first
ollama pull qwen3:8b

# Update the inference config (hot-reloadable)
nemoclaw my-assistant set-model ollama/qwen3:8b
nemoclaw my-assistant reload-inference

Why this matters: Every inference call is now logged, auditable, and policy-governed. You can see exactly what prompts your agent sent, what it received, and how long each call took — without sending anything outside your machine.
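To illustrate what "intercepted, logged, policy-checked" means, here is a toy router in Python. The real Inference Router speaks the Ollama HTTP API inside OpenShell; in this sketch the backend is just a callable, so nothing leaves the process, and the policy check is a made-up prefix rule.

```python
import time

# Toy interception layer: check policy, forward the call, record
# metadata. Illustration only — not NemoClaw's actual router.

def routed_call(backend, model: str, prompt: str, log: list) -> str:
    # Hypothetical policy: only models routed via the local Ollama
    # provider are permitted.
    if not model.startswith("ollama/"):
        raise PermissionError(f"model {model!r} not allowed by policy")
    start = time.time()
    reply = backend(prompt)          # the forwarded inference call
    log.append({
        "model": model,
        "prompt_chars": len(prompt), # log metadata, not the full prompt
        "latency_s": round(time.time() - start, 3),
    })
    return reply
```

The agent-facing call signature never changes — which is why, as noted above, the agent cannot tell it is being routed.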


Feature 5: Versioned Blueprints

Every sandbox is created from a Blueprint — a versioned, cryptographically verified Python artifact that describes exactly what the sandbox contains. Think of it like a lockfile for your agent’s entire runtime.

Blueprint lifecycle
──────────────────────────────────────────────────
  Step 1: Resolve    →  Download blueprint v0.4.1
  Step 2: Verify     →  Check SHA256 digest
  Step 3: Plan       →  What resources will be created?
  Step 4: Apply      →  Create the sandbox

This means two things:

Reproducibility. If you rebuild your sandbox on a different machine, or six months from now, you get the exact same environment. No “it worked yesterday” mysteries.

Auditability. You can inspect exactly what a blueprint contains before applying it:

nemoclaw blueprint inspect v0.4.1
Blueprint v0.4.1
  Digest    sha256:a3de86cd1c...
  Published 2026-03-14
  OpenShell 2.1.0
  Changes:
    - Network policy hot-reload now applies within 500ms
    - Fixed: seccomp filter was blocking harmless inotify calls
    - New: inference request logging to /sandbox/.openclaw/inference.log
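The Verify step is ordinary digest pinning. A minimal sketch of the idea, with a made-up artifact and a digest computed on the spot (the real blueprint format and its pinned digests are NemoClaw's own):

```python
import hashlib

# Compare a downloaded artifact's SHA-256 against the pinned digest;
# refuse to proceed on any mismatch. Illustration of the concept only.

def verify_blueprint(artifact: bytes, pinned: str) -> None:
    actual = "sha256:" + hashlib.sha256(artifact).hexdigest()
    if actual != pinned:
        raise ValueError(f"digest mismatch: expected {pinned}, got {actual}")

# Demo with an invented payload: pin the digest, then verify.
data = b"blueprint-v0.4.1-contents"
pin = "sha256:" + hashlib.sha256(data).hexdigest()
verify_blueprint(data, pin)   # passes silently; any tampering raises
```

This is also why the "Blueprint digest mismatch" error in Troubleshooting is a hard stop: a failed comparison means the artifact is not the one that was pinned.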

To upgrade your sandbox to a newer blueprint:

# See what is available
nemoclaw blueprint list

# Upgrade (rebuilds the sandbox, your /sandbox files are preserved)
nemoclaw my-assistant upgrade --blueprint v0.5.0

Why this matters: Most agent setups are a collection of shell commands no one fully remembers. Blueprints give you a single versioned artifact that captures everything — and lets you roll back if an upgrade breaks something.


Day-to-Day Usage Reference

Once everything is running, these are the commands you will use regularly:

Start / stop
────────────────────────────────────
  nemoclaw my-assistant start        start the sandbox
  nemoclaw my-assistant stop         stop it cleanly
  nemoclaw my-assistant restart      restart and confirm health

Check what is happening
────────────────────────────────────
  nemoclaw my-assistant status       full health overview
  nemoclaw my-assistant logs         last 100 log lines
  nemoclaw my-assistant logs -f      live log stream

Talk to the agent
────────────────────────────────────
  nemoclaw my-assistant connect      open a shell inside the sandbox
  openclaw tui                       interactive chat (run inside sandbox)
  openclaw agent --local -m "hello"  single message (run inside sandbox)

Manage models
────────────────────────────────────
  ollama list                        see models on your host
  ollama pull qwen3:8b               download a new model
  nemoclaw my-assistant set-model ollama/qwen3:8b
  nemoclaw my-assistant reload-inference

Manage network policy
────────────────────────────────────
  nano ~/.nemoclaw/my-assistant/network-policy.yaml
  nemoclaw my-assistant reload-policy

Monitor in real time
────────────────────────────────────
  openshell term                     full TUI with blocked request alerts

Troubleshooting

These are the errors most developers hit in the first hour.


Error: inference: disconnected

What it means: The sandbox cannot reach Ollama.

Why it happens: Inside Docker, localhost or 127.0.0.1 points to the container, not your host machine.

Fix: Update the Ollama URL in the inference config to use the Docker host address:

nemoclaw my-assistant set-inference-url http://host.docker.internal:11434
nemoclaw my-assistant reload-inference

On Linux, host.docker.internal may not resolve (it requires Docker 20.10+ and the container to be started with --add-host=host.docker.internal:host-gateway). In that case, use the Docker bridge IP instead:

ip route show | grep docker
# → 172.17.0.0/16 dev docker0
# Use 172.17.0.1 as your host address
nemoclaw my-assistant set-inference-url http://172.17.0.1:11434

Error: OOMKilled or sandbox crashes on start

What it means: The machine ran out of memory while starting the sandbox.

Fix: Add swap space:

sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Then try starting the sandbox again:

nemoclaw my-assistant start

Error: Blueprint digest mismatch

What it means: The downloaded blueprint does not match the expected hash. This could mean a corrupted download or a genuine integrity issue.

Fix: Clear the blueprint cache and re-download:

nemoclaw blueprint cache clear
nemoclaw my-assistant rebuild

If it fails again, do not proceed. Check the NVIDIA/NemoClaw GitHub issues page.


Error: PATH issue after install (commands not found)

What it means: The installer added nemoclaw to your PATH but your current shell session does not know yet.

Fix:

source ~/.bashrc   # or ~/.zshrc if you use zsh

Error: network policy blocked Ollama

What it means: Your network policy is too strict and is blocking the inference call to Ollama.

Fix: Make sure 127.0.0.1:11434 (or your Docker host address) is in the policy file:

egress:
  - host: "host.docker.internal"
    port: 11434

Then reload:

nemoclaw my-assistant reload-policy

NemoClaw vs Plain OpenClaw: Quick Comparison

Feature                     OpenClaw        NemoClaw
──────────────────────────────────────────────────────────
Network control             None            Allowlist + live approval
Filesystem access           Full user       /sandbox and /tmp only
Process restrictions        None            seccomp filter
Inference logging           No              Yes (full audit log)
Reproducible environment    No              Versioned blueprints
Rollback on bad upgrade     No              Blueprint version pinning
Setup complexity            Simple          Moderate (Docker required)
Memory overhead             Low             +2.4 GB for sandbox image

When to Use NemoClaw vs Plain OpenClaw

Use plain OpenClaw when:
  ✓ Personal use only, fully trusted input
  ✓ You are prototyping or experimenting
  ✓ Your machine has less than 8 GB RAM
  ✓ You do not have Docker installed

Use NemoClaw when:
  ✓ The agent handles input from other people (WhatsApp groups, etc.)
  ✓ The agent runs unattended for long periods
  ✓ You want a full audit trail of every inference call
  ✓ You are running anything that touches real data or systems
  ✓ You want reproducible agent environments across machines

What Alpha Status Means for You

NemoClaw is labelled alpha. In practice this means:

  • The openclaw nemoclaw plugin commands are still experimental — use the nemoclaw host CLI instead
  • Interfaces may change between versions (check the changelog before upgrading)
  • Ollama inference is experimental (NVIDIA Cloud is the stable path)
  • There will be rough edges — the GitHub issues page is active and responsive

For a personal always-on assistant, alpha is fine. For anything production or customer-facing, wait for a stable release or pin to a specific blueprint version and do not auto-upgrade.


Summary

NemoClaw takes the OpenClaw agent you already have and adds a hard security boundary around it. Not a best-effort soft limit — actual kernel-level enforcement that the agent cannot bypass.

The five features worth remembering:

Feature                   The short version
──────────────────────────────────────────────────────────────────────
Four-layer security wall  Network, filesystem, process, and inference — all independent
Live network approval     See blocked requests in real time, approve or deny
Filesystem isolation      Agent sees only /sandbox and /tmp — nothing else exists
Local Ollama inference    All model calls stay on your machine, fully logged
Versioned blueprints      Reproducible, auditable, rollback-able environments

Start with:

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

And if you hit errors, the Troubleshooting section above covers the five most common ones with exact fixes.

The GitHub repo is at github.com/NVIDIA/NemoClaw. It is actively developed and worth watching.