# Getting Started With NemoClaw: Install, Onboard, and Avoid the Obvious Mistakes
NemoClaw is attractive for the same reason most agent tooling is attractive: it promises to make capable AI assistants easier to run.
The difference is that NemoClaw also tries to make them less reckless.
If OpenClaw is the agent platform, NemoClaw is NVIDIA’s controlled wrapper around it. The setup couples OpenClaw with OpenShell, policy-based sandboxing, and NVIDIA-routed inference.
That sounds clean in a diagram.
In practice, getting started still means installing real infrastructure and understanding what the tool is trying to protect you from.
This guide walks through the current NemoClaw quickstart, what the commands are doing under the hood, and the mistakes most people will make on the first pass.
## What You Need Before You Start
According to the current NVIDIA README, NemoClaw is not a lightweight install.
Minimum requirements listed today:
- Ubuntu 22.04 LTS or later
- Node.js 20 or later
- npm 10 or later
- Docker installed and running
- OpenShell installed
- 4 vCPU minimum
- 8 GB RAM minimum, 16 GB recommended
- 20 GB free disk minimum, 40 GB recommended
There is also an operational note that matters: the sandbox image is around 2.4 GB compressed, and systems with less than 8 GB RAM can hit the OOM killer during image push. NVIDIA explicitly notes that swap can work around this, but more slowly.
Translation: don’t try this on a tiny starved box and then act surprised.
## Understand the Current Limitation First
NVIDIA’s README says NemoClaw currently requires a **fresh OpenClaw installation**.
That is the first thing to internalize.
If you already have a heavily customized OpenClaw setup, do not assume NemoClaw is a drop-in overlay. At this stage, it is closer to a curated environment than a patch.
That is not a flaw. It is just an alpha-stage constraint.
## The Install Path
The advertised install command is:
```bash
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
```
Yes, that is convenient.
Yes, you should still treat any curl-to-bash install with the usual caution.
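If you would rather look before you leap, the standard mitigation applies: download the script to disk, read it, and only then run it. A minimal sketch, using the same URL as the one-liner above (the grep pattern is just an illustrative starting point, not a complete audit):

```bash
# Fetch the installer to a file instead of piping it straight into bash.
url="https://nvidia.com/nemoclaw.sh"
curl -fsSL "$url" -o nemoclaw.sh || echo "download failed: $url"

# Skim for the usual red flags before executing anything.
grep -nE 'sudo|rm -rf|curl' nemoclaw.sh || true

# Run it only once you are satisfied with what it does:
# bash nemoclaw.sh
```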
The NVIDIA docs say this flow installs Node.js if needed and then launches a guided onboarding wizard that creates the sandbox, configures inference, and applies policies.
So the command is not merely installing a CLI. It is bootstrapping the environment.
## What `nemoclaw onboard` Is Actually Doing
NVIDIA’s developer docs describe the onboarding flow as the main setup entrypoint.
Behind the scenes, the process looks roughly like this:
```text
nemoclaw onboard
-> plugin resolves blueprint
-> blueprint is verified
-> blueprint plans resources
-> OpenShell CLI creates sandbox and policies
-> OpenClaw runs inside the sandbox
```
That means onboarding is where most of the real work happens.
The plugin itself is intentionally thin. The heavy logic lives in a versioned blueprint artifact that handles sandbox creation, policy application, and inference configuration.
This is actually good news for users.
If the system works, it should be more reproducible than a pile of custom shell steps copied from Discord.
## What Success Looks Like
The README shows a post-install summary with a sandbox name, model selection, and a few next-step commands.
Example commands include:
```bash
nemoclaw my-assistant connect
nemoclaw my-assistant status
nemoclaw my-assistant logs --follow
```
Once connected, you can interact with OpenClaw inside the sandbox via either:
```bash
openclaw tui
```
or a direct one-shot CLI message such as:
```bash
openclaw agent --agent main --local -m "hello" --session-id test
```
That split is nice.
You get both a conversational TUI and a scriptable CLI path.
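The CLI path is what makes smoke tests scriptable. A hedged sketch that reuses only the flags from the README example above; the prompts and session id are made up, and the `command -v` guard keeps it harmless on a machine where `openclaw` is not yet installed:

```bash
# Loop a few one-shot prompts through the agent under one session id.
# The flags come from the README example; everything else is illustrative.
if command -v openclaw >/dev/null 2>&1; then
  have_openclaw=1
  for prompt in "hello" "what can you do?"; do
    openclaw agent --agent main --local -m "$prompt" --session-id smoke-test
  done
else
  have_openclaw=0
  echo "openclaw not on PATH; install and connect first"
fi
```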
## The Real Point of the Sandbox
If you skip the architecture docs, you might think NemoClaw is mostly about easier setup.
That would miss the point.
The main value is containment.
The docs say the sandbox enforces:
- strict network policy
- filesystem isolation
- controlled inference routing
- approval-based handling for blocked hosts
The agent can write to `/sandbox` and `/tmp`. Other system paths are constrained.
For network, only policy-listed endpoints are allowed. If the agent tries to contact something else, OpenShell blocks it and surfaces the request.
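If you want to see the filesystem side of this for yourself, a rough probe from a shell inside the sandbox looks like the following. The writable paths (`/sandbox`, `/tmp`) come from the docs; the probe itself is an illustrative sanity check, not an official test suite:

```bash
# Try to create a file in each path and report the result.
checked=0
for dir in /sandbox /tmp /etc; do
  checked=$((checked + 1))
  if touch "$dir/.nemoclaw-probe" 2>/dev/null; then
    echo "writable:     $dir"
    rm -f "$dir/.nemoclaw-probe"
  else
    echo "not writable: $dir"
  fi
done
```

Inside a correctly configured sandbox you would expect the first two paths to be writable and system paths like `/etc` to be constrained.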
That means your OpenClaw agent is not just roaming freely across the machine and internet.
That is the product.
The installer is just the delivery vehicle.
## Inference Setup: What Model Path You’re Choosing
The current README highlights NVIDIA cloud inference with:
- `nvidia/nemotron-3-super-120b-a12b`
It also says NemoClaw prompts for an NVIDIA API key during onboarding.
The docs frame inference routing as part of the control model: requests are intercepted and routed through OpenShell instead of leaving the sandbox directly.
That gives you a cleaner separation between:
- the agent runtime
- the provider backend
- the credentials and network path
Whether you care about privacy, governance, or just cleaner operations, that is better than scattering raw provider calls across the setup.
## Three Mistakes People Will Make Immediately
### 1. Treating NemoClaw like a casual local utility
It is not.
It is a real stack with Docker, OpenShell, policy configuration, and model routing.
If you go in expecting “npm install and vibes,” you will get frustrated.
### 2. Ignoring resource requirements
If your machine is underpowered, the problem is not that NemoClaw is broken. The problem is that you are trying to launch a sandboxed agent environment on hardware that barely wants to open Chrome.
### 3. Forgetting that blocked actions are a feature
If network egress gets blocked, that is not necessarily failure. It may be the system doing exactly what it is supposed to do.
A lot of users are conditioned to think friction equals bug. In sandboxed systems, friction often equals policy enforcement.
## Practical First-Run Checklist
If you are setting up NemoClaw today, use this order.
### Step 1: Verify prerequisites first
Check Node, npm, Docker, OpenShell, free disk, RAM, and swap.
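A quick pre-flight script covering most of that list. The version thresholds and the 20 GB disk floor come from the requirements section; the command names (`node`, `npm`, `docker`) are the standard binaries, and the output format is only a suggestion:

```bash
# Report each prerequisite as OK / MISSING / TOO OLD without aborting.
fail=0

# meets_min VERSION MIN — succeed when VERSION's major component >= MIN.
meets_min() {
  local major="${1#v}"          # tolerate a leading "v" (node prints v20.x.y)
  major="${major%%.*}"
  [ "$major" -ge "$2" ] 2>/dev/null
}

check_tool() {  # check_tool NAME MIN CMD...
  local name="$1" min="$2" ver; shift 2
  if ver="$("$@" 2>/dev/null)"; then
    if meets_min "$ver" "$min"; then echo "OK: $name $ver"
    else echo "TOO OLD: $name $ver (need >= $min)"; fail=1; fi
  else
    echo "MISSING: $name"; fail=1
  fi
}

check_tool "Node.js" 20 node --version
check_tool "npm"     10 npm --version

if docker info >/dev/null 2>&1; then echo "OK: Docker daemon"
else echo "MISSING: Docker daemon (or not running)"; fail=1; fi

# README asks for 20 GB free disk minimum.
free_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
if [ "${free_gb:-0}" -ge 20 ]; then echo "OK: ${free_gb} GB free on /"
else echo "LOW DISK: ${free_gb:-?} GB free (need >= 20)"; fail=1; fi

echo "pre-flight result: $fail (0 = all good)"
```

Checking RAM, swap, and the OpenShell install is left to you, since how you verify those depends on your distribution.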
### Step 2: Use a clean OpenClaw environment
Do not force it onto a hand-tuned existing install unless NVIDIA’s docs explicitly say that path is supported.
### Step 3: Run onboarding and watch the outputs carefully
Do not mash Enter through it like a desktop installer. This is infrastructure setup.
### Step 4: Test status before testing ambition
Run:
```bash
nemoclaw my-assistant status
```
before you ask the agent to do anything nontrivial.
### Step 5: Use a simple first command
Start with a minimal message or open the TUI. Confirm the environment is alive before adding more variables.
### Step 6: Treat policy prompts as signal
If a host gets blocked, inspect why. Do not blindly approve everything or you defeat the point of the runtime.
## When NemoClaw Makes Sense
NemoClaw is a good fit if you want:
- persistent OpenClaw agents
- tighter runtime boundaries
- explicit network and filesystem controls
- reproducible sandboxed setup
- a more governable inference path
It is a bad fit if you want:
- the absolute simplest toy install
- a zero-dependency local hack
- a stable enterprise platform today without alpha caveats
## Final Take
The getting-started story for NemoClaw is better than many early-stage agent projects because NVIDIA is at least honest about the stack.
It tells you there is a sandbox.
It tells you there is policy.
It tells you there are resource requirements.
It tells you the project is alpha.
Good.
That honesty is useful.
The practical takeaway is simple: approach NemoClaw like infrastructure, not like an app download.
If you do that, the setup path makes sense.
If you don’t, you’ll spend your first hour fighting requirements that were documented from the start.
## Research Notes
Primary sources reviewed:
- NVIDIA/NemoClaw GitHub README quickstart and requirements
- NVIDIA NemoClaw developer guide: How It Works
- NVIDIA NemoClaw overview page