NemoClaw can be installed with a single command, making it easy to add security and privacy for always‑on OpenClaw agents. It runs in the cloud, on premises, and on NVIDIA GeForce RTX PCs, NVIDIA DGX Station, and NVIDIA DGX Spark.  

At GTC, NVIDIA announced the NVIDIA NemoClaw stack for the OpenClaw agent platform. With NemoClaw, users can install NVIDIA Nemotron models and the new NVIDIA OpenShell runtime in one step. This update adds privacy and security controls. As a result, self-evolving autonomous AI agents, called Claws, are now more trustworthy, scalable, and accessible.  

“OpenClaw opened the next frontier of AI to everyone and became the fastest-growing open-source project in history,” said Jensen Huang, founder and CEO of NVIDIA. “Mac and Windows are operating systems for the personal computer; OpenClaw is the operating system for personal AI. This is the moment the industry has been waiting for: the beginning of a new renaissance in software.”  

“OpenClaw brings people closer to AI and helps build a world where everyone has their own agents,” said Peter Steinberger, creator of OpenClaw. “Together with NVIDIA and the wider ecosystem, we are building the claws and guardrails that let everyone create powerful, secure AI assistants.”  

NemoClaw uses the NVIDIA Agent Toolkit to optimize OpenClaw with a single command. It installs OpenShell, which provides open models and a secure sandbox that protects data privacy and security for autonomous agents. NemoClaw adds an important infrastructure layer: it gives agents the access they need to work well while enforcing security, network, and privacy rules.  
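The rule-enforcement idea described above can be sketched conceptually: a proposed agent action is checked against network and filesystem policies before it is allowed to run. This is a minimal illustration with assumed names; NemoClaw's and OpenShell's actual interfaces are not documented in this announcement.

```python
# Hypothetical sketch of sandbox policy enforcement. The allowlist, path
# prefixes, and action format are illustrative assumptions only.

ALLOWED_HOSTS = {"localhost", "models.internal"}       # assumed network allowlist
BLOCKED_PATH_PREFIXES = ("/etc/", "/home/user/.ssh/")  # assumed private paths

def is_action_allowed(action: dict) -> bool:
    """Return True if the sandbox policy permits the proposed action."""
    if action["type"] == "network":
        return action["host"] in ALLOWED_HOSTS
    if action["type"] == "file_read":
        return not action["path"].startswith(BLOCKED_PATH_PREFIXES)
    return False  # deny anything the policy does not recognize

print(is_action_allowed({"type": "network", "host": "localhost"}))      # True
print(is_action_allowed({"type": "file_read", "path": "/etc/passwd"}))  # False
```

Denying unrecognized action types by default reflects the announcement's framing that agents get "the access they need" while everything else is constrained by policy.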

NemoClaw works with any coding agent. With open agents, it can use open models, including NVIDIA Nemotron, running locally on a user’s system through a privacy router. By combining local and cloud models, agents can also access advanced models in the cloud. Agents can develop new skills and complete tasks while still following the defined privacy and security rules.  
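The local/cloud split described above can be illustrated with a toy routing rule: requests touching private data stay on the local model, while demanding but non-sensitive tasks may go to a cloud model. The request fields and routing logic here are assumptions for illustration; the actual privacy router's behavior is not specified in this announcement.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "privacy router" deciding between a local
# Nemotron model and an advanced cloud model. All names are illustrative.

@dataclass
class Request:
    prompt: str
    contains_private_data: bool  # e.g. flagged by an upstream classifier
    needs_frontier_model: bool   # task assumed too hard for the local model

def route(request: Request) -> str:
    """Pick a backend while enforcing the privacy rule."""
    if request.contains_private_data:
        return "local"  # private data never leaves the user's system
    if request.needs_frontier_model:
        return "cloud"  # non-sensitive, demanding tasks can use a cloud model
    return "local"      # default: the local open model

print(route(Request("summarize my tax documents", True, True)))  # local
print(route(Request("plan a complex refactor", False, True)))    # cloud
```

Note that the privacy check runs first, so even a task that would benefit from a frontier model stays local when private data is involved.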

Always-on agents need dedicated computing to build software and tools and to complete tasks. NemoClaw for OpenClaw can run on any dedicated platform, such as NVIDIA GeForce RTX PCs and laptops, NVIDIA RTX PRO-powered workstations, and NVIDIA DGX Station or DGX Spark AI supercomputers. This setup enables local computation, allowing autonomous agents to run continuously. Stop by the NVIDIA Build-a-Claw event at GTC Park, March 16 to 19 (1 to 5 p.m. on Monday and 8 a.m. to 5 p.m. Tuesday through Thursday), to customize and deploy an active, always-on AI assistant with NemoClaw for OpenClaw.  

Source: NVIDIA Announces NemoClaw for the OpenClaw Community 

NVIDIA and Marvell Technology, Inc. (Nasdaq: MRVL) announced a new partnership. It will connect Marvell to the NVIDIA AI Factory ecosystem, a set of platforms and resources for developing artificial intelligence solutions, and to the AI-RAN ecosystem (a network approach that uses artificial intelligence to manage and optimize radio access networks) using NVIDIA NVLink Fusion (a high-speed interconnect technology for data and workload sharing). It will give customers more options and flexibility when building next-generation infrastructure on NVIDIA architectures. The companies also plan to work together on silicon photonics technology (using light to transfer data between computer chips).  

In addition to the partnership, NVIDIA has invested $2 billion in Marvell, strengthening their collaboration.  

This partnership builds on NVIDIA NVLink Fusion and the rack-scale platform. It allows customers to create semi-custom AI infrastructure within the NVIDIA NVLink ecosystem, including Marvell-supplied custom XPUs and networking compatible with NVLink Fusion. NVIDIA will provide supporting technologies, including Vera CPUs, ConnectX NICs, DPUs, NVLink interconnect, Spectrum-X switches, and rack-scale AI compute.  

For customers building custom CPUs, NVLink Fusion enables the creation of a mixed AI infrastructure that works fully with NVLink, making it easy to integrate with NVIDIA GPUs, networking, and storage platforms. Customers can also leverage NVIDIA’s technology stack and global supply chain.  

The companies also plan to turn global telecommunications networks into AI infrastructure. They will use NVIDIA Aerial and AI-RAN for 5G and 6G. Their goal is to improve AI networking by introducing advanced optical interconnect solutions and silicon photonics technology.  

“The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories,” said Jensen Huang, founder and CEO of NVIDIA. “Together with Marvell, we are enabling customers to leverage NVIDIA’s AI infrastructure ecosystem and scale to build specialized AI compute.”  

“Our expanded partnership with NVIDIA highlights the importance of scaling AI through high-speed connectivity, optical interconnects, and advanced interconnect infrastructure,” said Matt Murphy, chairman and CEO of Marvell. “By combining Marvell’s strengths in high-performance analog, optical DSPs, silicon photonics, and custom silicon with NVIDIA’s growing AI ecosystem through NVLink Fusion, we help customers build scalable yet efficient AI infrastructure.”  

About Marvell 

For over 30 years, we have created the data infrastructure technology that connects the world, building solutions for our customers. Working closely with them, top technology companies have relied on our semiconductor solutions to move, store, process, and secure data. We shape the future of enterprise, cloud, and carrier architectures.  

About NVIDIA 

NVIDIA (NASDAQ: NVDA) is the world leader in AI and accelerated computing.  

Marvell Forward-Looking Statements 

Marvell and the M logo are trademarks of Marvell or its affiliates. Please visit www.marvell.com for a complete list of Marvell trademarks. Other names and brands may be claimed as the property of others. 

Source: NVIDIA AI Ecosystem Expands as Marvell Joins Forces Through NVLink Fusion