# Getting Started With NemoClaw: Install, Onboard, and Avoid the Obvious Mistakes
NemoClaw is attractive for the exact same reason most modern agent tooling is attractive: it promises to make highly capable AI assistants drastically easier to run, deploy, and manage on local or cloud hardware.
The critical difference is that NemoClaw also tries to make them significantly less reckless.
If OpenClaw is the core agent platform—providing the reasoning loop, the tool-use capabilities, and the conversational interfaces—NemoClaw is NVIDIA’s hardened, controlled wrapper around it. The setup explicitly couples OpenClaw with OpenShell, policy-based sandboxing, and NVIDIA-routed inference pipelines.
That sounds incredibly clean and enterprise-ready in an architecture diagram.
In practice, getting started still means installing real infrastructure and fundamentally understanding what the tool is trying to protect you from. You are not just unzipping an executable; you are provisioning a micro-environment.
This guide walks through the current NemoClaw quickstart: what the installation commands are actually doing under the hood, the architecture you are deploying, and the obvious mistakes most developers will make on their first pass.
## What You Need Before You Start
According to the current NVIDIA README, NemoClaw is not a lightweight install. If you are used to zero-dependency binaries that run in megabytes of memory, you need to adjust your expectations immediately.
The minimum requirements listed today highlight its enterprise and infrastructure-first nature:
- Ubuntu 22.04 LTS or later (or a highly compatible Linux distribution)
- Node.js 20 or later (required for the orchestration scripts)
- npm 10 or later
- Docker installed, configured, and running (the core of the sandbox containerization)
- OpenShell installed (the security and policy enforcement layer)
- 4 vCPU minimum (to handle the orchestration overhead and Node environments)
- 8 GB RAM minimum, 16 GB highly recommended
- 20 GB free disk minimum, 40 GB recommended for image caching and logs
There is also a crucial operational note that matters for desktop users and budget VPS deployments: the sandbox image is around 2.4 GB compressed. During the image pull and extraction phase, systems with less than 8 GB RAM can easily hit the Linux Out-Of-Memory (OOM) killer. NVIDIA explicitly notes that enabling swap space can work around this issue, but it will significantly slow down the provisioning process due to disk thrashing.
Translation: don’t try to run this on a tiny starved $5/month virtual private server and then act surprised when the kernel kills the process. Give it the resources it asks for.
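Before you install anything, a quick read-only check of RAM and swap can save a failed provisioning run. This sketch reads the standard Linux `/proc/meminfo` interface and assumes nothing about NemoClaw itself; the 8 GB figure is the README's stated minimum.

```shell
# Read-only preflight: report total RAM and configured swap from /proc/meminfo.
# The README's stated minimum is 8 GB RAM (8192 MB); 16 GB is recommended.
awk '/MemTotal/  {printf "RAM:  %d MB\n", $2/1024}
     /SwapTotal/ {printf "swap: %d MB\n", $2/1024}' /proc/meminfo
```

If the swap line reads 0 MB and you are near the RAM minimum, add swap before onboarding rather than after the OOM killer fires.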
## Understand the Current Limitation First
NVIDIA’s README explicitly states that NemoClaw currently requires a **fresh OpenClaw installation**.
That is the first conceptual hurdle you must internalize.
If you already have a heavily customized OpenClaw setup on your machine—perhaps with a dozen custom skills, specific local environment variables, and modified core files—do not assume NemoClaw is a drop-in overlay that you can just slide underneath your existing setup. At this stage of development, it is closer to a curated, monolithic environment than a modular patch.
This is not a flaw in the design. It is simply an alpha-stage constraint. NVIDIA is ensuring that the baseline environment matches their strict security and policy expectations before they open the doors to migrating legacy or highly modified agent workspaces.
## The Install Path: Convenience Meets Infrastructure
The advertised install command is the industry standard (and heavily debated) shell script pipe:
```bash
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
```
Yes, that is highly convenient. It gets you from zero to deployed in minutes.
Yes, you should still treat any curl-to-bash installation script with the usual engineering caution. You are downloading a script from the internet and executing it with high privileges on your system.
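One lower-risk variant is to download the script to a file, read it, and only then execute it. The sketch below demonstrates that pattern; a stand-in script is generated locally so the snippet is self-contained, and the real fetch (using the URL advertised above) is shown as a comment.

```shell
# Inspect-before-run pattern. In real use you would fetch the installer first:
#   curl -fsSL https://nvidia.com/nemoclaw.sh -o nemoclaw-install.sh
# A stand-in script is created here so the example runs on its own.
printf '#!/bin/bash\necho "installer ran"\n' > nemoclaw-install.sh
bash -n nemoclaw-install.sh && echo "syntax OK"   # parse check before executing
bash nemoclaw-install.sh                          # run only after reviewing it
```

This costs you one extra command and gives you a chance to see what the script will do to your machine before it does it.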
The NVIDIA documentation notes that this flow does quite a bit of heavy lifting: it installs Node.js if your system is lacking the correct version, sets up base directories, and then launches a guided onboarding wizard. This wizard is responsible for creating the sandbox, configuring the inference provider, and applying the initial security policies.
So, when you run this command, you are not merely installing a Command Line Interface (CLI). You are bootstrapping an entire containerized environment.
## What `nemoclaw onboard` Is Actually Doing
NVIDIA’s developer documentation describes the onboarding flow as the primary setup entrypoint.
Behind the scenes, the process looks roughly like this sequence of operations:
```text
nemoclaw onboard
-> plugin resolves the required blueprint
-> blueprint signature and version are verified
-> blueprint plans the necessary system resources
-> OpenShell CLI creates the Docker sandbox and applies policies
-> OpenClaw is injected and runs inside the isolated sandbox
```
That means the onboarding step is where most of the actual computational and architectural work happens.
The NemoClaw plugin itself is intentionally kept thin. The heavy logic lives in a versioned "blueprint" artifact. This blueprint is a declarative configuration file that handles the exact specifications for sandbox creation, the granular policy application (what the agent can and cannot do), and the inference configuration.
This architecture is actually excellent news for users and system administrators.
If the blueprint system works as intended, it makes agent deployment far more reproducible than a fragile pile of custom shell scripts copied and pasted from a Discord support channel. You can version-control a blueprint. You cannot easily version-control a messy server history.
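To make the idea concrete, a blueprint might look something like the sketch below. The README does not publish the schema, so every field name here is hypothetical; only the responsibilities (sandbox spec, policy, inference config) come from the description above.

```yaml
# Hypothetical blueprint sketch. Field names are illustrative, NOT the
# documented NemoClaw schema; the model and paths come from this article.
sandbox:
  image: nemoclaw-sandbox          # the ~2.4 GB compressed image from the README
  writable_paths: [/sandbox, /tmp]
policy:
  egress:
    default: deny
    allow: [github.com, api.openai.com]
inference:
  provider: nvidia
  model: nvidia/nemotron-3-super-120b-a12b
```

Because it is a single declarative artifact, a blueprint like this can be reviewed in a pull request and pinned to a version, which is exactly the reproducibility argument being made here.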
## The Role of OpenShell in NemoClaw's Architecture
To truly understand NemoClaw, you have to understand OpenShell's role in the stack. NemoClaw doesn't invent its own container runtime; it relies heavily on OpenShell to act as the enforcer.
OpenShell is a secure execution layer designed specifically for AI workloads. When NemoClaw deploys OpenClaw, it places it inside an OpenShell-managed sandbox.
Why does this matter? Because AI agents write code, execute scripts, and download files from the internet. A raw Docker container provides namespace isolation, but an agent running as root inside a basic Docker container can still cause havoc or be used as a pivot point if the container is misconfigured.
OpenShell provides a hardened boundary. It intercepts system calls, enforces strict network egress rules (e.g., "this agent can only talk to github.com and api.openai.com"), and monitors filesystem access. If the OpenClaw agent hallucinates a command to delete the root directory or attempts to curl a malicious payload from an unknown IP address, OpenShell catches the violation at the system level and denies the request based on the NemoClaw blueprint policies.
## What Success Looks Like
Once the installation and onboarding complete, the README shows a post-install summary. This usually includes a newly generated sandbox name, your chosen model selection, and a few next-step commands to get you moving.
Example commands to manage your new environment include:
```bash
nemoclaw my-assistant connect
nemoclaw my-assistant status
nemoclaw my-assistant logs --follow
```
Once connected to the running sandbox, you can interact with the OpenClaw instance inside via either the terminal UI:
```bash
openclaw tui
```
or via a direct one-shot CLI message, which is perfect for automation and CI/CD pipelines:
```bash
openclaw agent --agent main --local -m "Summarize the latest logs" --session-id test
```
That operational split is a massive benefit. You get both a conversational Terminal UI (TUI) for debugging and interactive work, and a strictly scriptable CLI path for programmatic integration.
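For the scriptable path, the natural pattern is to check the exit status so a pipeline can fail fast. The sketch below uses a shell function stub so it runs standalone; in real use the function body would be the `openclaw agent` invocation shown above, not the echo.

```shell
# CI-style wrapper around a one-shot agent call. run_agent is a stub standing
# in for: openclaw agent --agent main --local -m "$1" --session-id "$2"
run_agent() {
  echo "agent ok: $1 (session: $2)"
}

if run_agent "Summarize the latest logs" "nightly-check"; then
  echo "pipeline step passed"
else
  echo "pipeline step failed" >&2
  exit 1
fi
```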
## The Real Point of the Sandbox
If you skip reading the architecture documentation, you might mistakenly think NemoClaw is mostly about providing an easier setup script for OpenClaw.
That would completely miss the point of the project.
The primary value proposition of NemoClaw is **containment**.
The documentation highlights that the sandbox strictly enforces:
- **Strict network policy:** Egress is denied by default.
- **Filesystem isolation:** The host filesystem is protected.
- **Controlled inference routing:** API keys and provider logic are kept out of the agent's reach.
- **Approval-based handling:** Blocked actions can be surfaced to the human operator for explicit approval.
By default, the agent is restricted to writing to specific ephemeral directories like `/sandbox` and `/tmp`. All other system paths are heavily constrained or entirely read-only.
For network access, only endpoints explicitly listed in the policy blueprint are allowed. If the agent goes rogue and tries to contact an unapproved server, OpenShell drops the connection, blocks the process, and surfaces the security request to the user logs.
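You can see that containment from the inside with plain shell. The directory names below follow this article; what is actually writable depends entirely on your blueprint, so treat this as a probe, not a specification.

```shell
# Probe which paths are writable from the current process. On a stock Linux
# host, /tmp is writable and /sandbox typically does not exist; inside a
# NemoClaw sandbox the pattern should invert per the policy described above.
for dir in /sandbox /tmp /etc; do
  if [ -w "$dir" ]; then
    echo "$dir: writable"
  else
    echo "$dir: read-only or absent"
  fi
done
```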
That means your OpenClaw agent is not just roaming freely across your workstation and the open internet. It operates in a glass box.
That containment is the actual product. The installer is merely the delivery vehicle.
## Inference Setup: What Model Path You’re Choosing
The current README strongly highlights using NVIDIA cloud inference, specifically targeting high-performance reasoning models like:
- `nvidia/nemotron-3-super-120b-a12b`
During the `nemoclaw onboard` phase, the system prompts for an NVIDIA API key.
The documentation frames this inference routing as a core part of the control model. Instead of the agent making raw, direct HTTP calls to the LLM provider from inside its workspace, requests are intercepted and safely routed through the OpenShell layer.
This gives operators a much cleaner separation of concerns between:
- The agent runtime (executing code, using tools)
- The provider backend (generating the tokens)
- The credentials and network path (held securely outside the agent's memory space)
Whether you care about enterprise privacy, governance compliance, or just prefer cleaner, more modular operations, this separation is vastly superior to scattering raw provider API keys in plaintext `.env` files inside the agent's working directory.
## Day Two Operations: Managing Your Sandboxed Agent
Getting NemoClaw installed is day one. Day two is about keeping it running smoothly. Because the agent lives in a sandbox, your normal operational habits need to adjust.
**Log Management:**
Since the agent runs in a containerized OpenShell environment, you cannot just `tail -f /var/log/syslog`. You must use the NemoClaw CLI to stream logs. Getting comfortable with `nemoclaw [assistant-name] logs --follow` is essential for debugging when the agent inevitably fails a task or hits a policy boundary.
**Updating Policies:**
As your agent's use cases expand, it will inevitably need access to new websites or tools. You will need to learn how to update the OpenShell policies safely. Rather than globally disabling the firewall, you will iteratively add specific domains to the allowlist. This creates a highly customized, least-privilege environment tailored exactly to what the agent needs to do and nothing more.
**Handling Persistent State:**
Because `/tmp` and other sandbox directories might be ephemeral depending on your container configuration, you need to be mindful of where the agent saves important files. Understanding how NemoClaw mounts volumes and handles persistent workspace directories ensures you don't lose valuable agent outputs when the sandbox restarts.
## Three Mistakes People Will Make Immediately
### 1. Treating NemoClaw like a casual local utility
It is not a simple Node.js CLI tool. It is a real infrastructure stack comprising Docker, OpenShell, complex policy configurations, and managed model routing. If you go into the installation expecting an "npm install and good vibes" experience, you will get frustrated when Docker permissions fail or OpenShell blocks a basic HTTP request.
### 2. Ignoring resource requirements
If your machine is underpowered, the problem is not that NemoClaw is broken or badly optimized. The problem is that you are trying to launch an entire orchestrated, sandboxed agent environment on hardware that barely wants to open Google Chrome. You need RAM, and you need CPU overhead.
### 3. Forgetting that blocked actions are a feature, not a bug
If network egress gets blocked by the sandbox, that is not necessarily a failure state. It is highly likely the system is doing exactly what it is supposed to do. A lot of users are conditioned to think that any friction equals a software bug. In heavily sandboxed systems, friction often equals successful policy enforcement. Learn to read the policy violation logs rather than immediately filing a GitHub issue.
## Practical First-Run Checklist
If you are setting up NemoClaw today, follow this exact order of operations to save yourself hours of debugging.
### Step 1: Verify prerequisites thoroughly first
Do not skip this. Check your Node version (`node -v`), ensure npm is updated, verify Docker is running and your user has docker group permissions, check OpenShell, and use `htop` or `free -m` to confirm you actually have 8 GB of free RAM and sufficient swap space.
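The binary checks can be a short loop. The command names below follow this article's prerequisite list; `command -v` only confirms each tool is on `PATH`, so you still need to verify versions (Node.js ≥ 20, npm ≥ 10 per the README) separately.

```shell
# Report which prerequisite binaries are on PATH. Presence only, not versions:
# the README requires Node.js >= 20 and npm >= 10, so check `node -v` too.
for cmd in node npm docker openshell; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```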
### Step 2: Use a clean OpenClaw environment
Do not attempt to force NemoClaw onto a hand-tuned, heavily customized existing OpenClaw install. Start fresh, unless NVIDIA’s official documentation explicitly states that a migration path is fully supported.
### Step 3: Run onboarding and watch the outputs carefully
Run `nemoclaw onboard`. Do not just mash the `Enter` key through the wizard like a standard desktop application installer. Read the prompts. Note the sandbox names and the policies it states it is applying. This is infrastructure setup.
### Step 4: Test status before testing ambition
Before you ask the agent to write a full web application, ensure the environment is actually healthy. Run:
```bash
nemoclaw my-assistant status
```
Ensure the container is running and the proxy is connected.
### Step 5: Use a simple first command
Start with a minimal message or simply open the TUI. Ask the agent "What is your working directory?" Confirm the environment is alive, responding, and can access its designated filesystem before adding more complex variables.
### Step 6: Treat policy prompts as a security signal
If a host gets blocked or a command is denied, inspect *why*. Do not blindly approve everything or automatically disable the firewall, or you completely defeat the purpose of using NemoClaw in the first place.
## When NemoClaw Makes Sense
NemoClaw is a good fit if you want:
- Persistent, long-running OpenClaw agents.
- Tighter, enterprise-grade runtime boundaries.
- Explicit, readable network and filesystem controls.
- A highly reproducible, sandboxed setup process.
- A more governable, secure inference path for API keys.
It is a remarkably bad fit if you want:
- The absolute simplest toy installation to play with for 5 minutes.
- A zero-dependency local hack script.
- A completely stable, bug-free enterprise platform today (it comes with alpha caveats).
## Frequently Asked Questions (FAQ)
**Q: Can I use Podman instead of Docker for the sandbox?**
Currently, the NemoClaw scripts and OpenShell integration rely heavily on the Docker daemon and standard Docker socket behaviors. While advanced users might manage to alias Podman to Docker, the officially supported and tested path requires Docker. Using Podman may result in unexpected networking or volume mount failures during onboarding.
**Q: Do I absolutely have to use NVIDIA's models?**
While the onboarding flow heavily promotes NVIDIA's Nemotron models (and prompts for an NVIDIA API key), the underlying OpenClaw architecture is model-agnostic. However, NemoClaw's specific inference routing and proxying features are optimized for their stack out of the box. Changing providers requires editing the blueprint or OpenClaw config manually post-install.
**Q: How do I persist files if the agent is sandboxed?**
The sandbox restricts access to the host machine, but it mounts specific workspace directories (usually within the NemoClaw setup folder) into the container. Anything the agent writes to its designated `/sandbox` workspace is preserved on your host machine in that mapped directory. If the container is destroyed and recreated with the same blueprint, that workspace volume is reattached.
**Q: OpenShell is blocking the agent from downloading an npm package. How do I fix this?**
This is the sandbox working as intended. The default policy likely restricts egress to unknown package registries. You must inspect the NemoClaw policy files (associated with your blueprint) and add the specific registry (e.g., `registry.npmjs.org`) to the network allowlist, then apply the updated policy.
**Q: Can I run NemoClaw entirely offline?**
Bootstrapping NemoClaw requires an internet connection to pull the Docker images, download Node dependencies, and fetch the blueprints. Once the environment is fully provisioned and the container is running, the agent *can* operate offline, provided you are routing inference to a local model (like Ollama or a local vLLM instance) rather than a cloud provider API.
## Conclusion
The getting-started story for NemoClaw is fundamentally better than many early-stage AI agent projects because NVIDIA is at least honest about the complexity of the stack.
They tell you upfront there is a sandbox. They explicitly tell you there is a strict policy engine. They clearly outline the hefty resource requirements. They do not hide that the project is in alpha.
This honesty is incredibly useful for developers.
The practical takeaway is simple: you must approach NemoClaw like you are deploying backend infrastructure, not like you are downloading a simple mobile app. If you adopt that mindset, the setup path makes complete sense, and the security benefits become immediately obvious. If you don’t, you will spend your first hour fighting system requirements and security policies that were clearly documented from the start. Embrace the sandbox, respect the resource limits, and enjoy a much safer agent experience.