Artificial intelligence is reshaping the landscape of computing today.
Many companies use Kubernetes to automate the deployment, scaling, and management of containerized AI workloads, making high-performance AI infrastructure more transparent and efficient to operate. NVIDIA is donating a key piece of this software, the NVIDIA Dynamic Resource Allocation (DRA) driver for GPUs, to the Cloud Native Computing Foundation (CNCF), the independent organization that stewards the cloud-native ecosystem.
Announced today at KubeCon Europe in Amsterdam, the donation moves stewardship of the driver from NVIDIA to the Kubernetes community, opening it to wider contributions and faster improvement. “Collaboration with the Kubernetes and CNCF communities to upstream the NVIDIA DRA driver for GPUs constitutes a major milestone for open-source Kubernetes and AI infrastructure,” said Chris Aniszczyk, Chief Technology Officer of CNCF. “By aligning its hardware innovations with upstream Kubernetes and AI conformance efforts, NVIDIA is making high-performance GPU orchestration effortless and accessible to all.”
NVIDIA has also worked with CNCF to add GPU support to Kata Containers, lightweight virtual machines that behave like regular containers while providing the isolation of traditional virtual machines, offering improved security for workloads. This brings hardware acceleration to a more secure environment and makes it easier for organizations to adopt confidential computing, helping keep their data safe during processing.
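As a minimal sketch of what this looks like in practice, the pod below opts into a Kata runtime while requesting a GPU. The runtime class name (`kata-qemu`) and the `nvidia.com/gpu` resource key are assumptions about a typical Kata Containers plus NVIDIA device-plugin installation, not values from this announcement:

```yaml
# Hypothetical pod spec: the workload runs inside a lightweight Kata VM
# for stronger isolation while still receiving one NVIDIA GPU.
apiVersion: v1
kind: Pod
metadata:
  name: secure-gpu-workload
spec:
  runtimeClassName: kata-qemu      # assumes a Kata runtime class is installed
  containers:
  - name: inference
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1          # one GPU passed through to the Kata VM
```

The only change from an ordinary GPU pod is the `runtimeClassName`, which is what makes adoption of the more secure runtime straightforward.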
Simplifying AI Infrastructure
Managing GPUs for AI in data centers has historically been complex; these efforts aim to make high-performance computing easier to use. Developers will benefit in several ways:
- Improved efficiency: the driver shares GPU resources more effectively, leveraging NVIDIA Multi-Process Service and Multi-Instance GPU technologies. It also natively supports multi-node connectivity, including NVIDIA multi-node NVLink, which is key for training large AI models on NVIDIA Grace Blackwell systems and other advanced AI infrastructure.
- Flexibility: developers can adjust their hardware setup as needed, changing how resources are used at any time.
- Precision: users can request exactly the compute, memory, or interconnect resources a workload needs.
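To illustrate the precision point, Kubernetes Dynamic Resource Allocation lets a workload claim a device through a `ResourceClaim` rather than an opaque device count. The following is a sketch under assumptions, not a definitive manifest: the `resource.k8s.io` API version and the `gpu.nvidia.com` device class name depend on the Kubernetes release and on how the NVIDIA DRA driver is installed in a given cluster.

```yaml
# Hypothetical DRA example: claim one device from the NVIDIA GPU
# device class, then reference that claim from a pod.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com   # assumed class registered by the driver
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-consumer
spec:
  containers:
  - name: train
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      claims:
      - name: gpu                       # binds this container to the claim
  resourceClaims:
  - name: gpu
    resourceClaimName: single-gpu
```

Because the claim is a first-class API object, its device requests can be made more specific (for example, selecting a particular GPU model or a MIG partition) without changing the pod's container spec.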
NVIDIA collaborates with major industry players to advance these features for the cloud-native community, ensuring they are at the core of enterprise AI infrastructure. According to Chris Wright, Chief Technology Officer and Senior Vice President of Global Engineering at Red Hat, the donation brings “standardization to the high-performance infrastructure components that fuel production AI workloads.” He added that NVIDIA’s donation of the NVIDIA DRA driver for GPUs “helps to cement the role of open source in AI’s evolution,” and that Red Hat looks forward to collaborating with NVIDIA and the wider community within the Kubernetes ecosystem.
“Open-source software and its communities are a foundation of the infrastructure used for scientific computing and research,” said Ricardo Rocha, lead of Platforms Infrastructure at CERN. “For organizations like CERN, efficiently analyzing petabytes of data is essential to discovery, and community-driven innovation accelerates science. NVIDIA’s donation of the DRA driver supports the ecosystem that researchers depend on for both traditional scientific computing and machine learning workloads.”
Expanding the Open Source Horizon
NVIDIA introduced the CLAW reference stack, which provides a standardized set of software for running AI infrastructure, and the NVIDIA OpenShell runtime, which lets users securely run autonomous agents (programs that perform tasks independently). OpenShell offers security and privacy controls and works directly with Linux (an open-source operating system), eBPF (a technology for running sandboxed programs inside the operating system kernel), and Kubernetes.
NVIDIA also announced today that its high-performance AI workload scheduler, the KAI scheduler, is now a CNCF sandbox project. This is an important step to encourage more collaboration and ensure the technology grows with the needs of the cloud-native community. Developers and firms can start using and contributing to the KAI scheduler now.
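As an illustrative sketch of how a workload might be submitted through such a scheduler, the pod below names the scheduler and tags a queue. The scheduler name (`kai-scheduler`) and the queue label key are assumptions about a typical KAI Scheduler deployment; the exact values depend on how it is installed in a given cluster.

```yaml
# Hypothetical submission to the KAI scheduler: the pod opts out of the
# default scheduler and is associated with a team queue via a label.
apiVersion: v1
kind: Pod
metadata:
  name: training-job
  labels:
    kai.scheduler/queue: team-a   # assumed queue label key
spec:
  schedulerName: kai-scheduler    # assumed scheduler name
  containers:
  - name: train
    image: nvcr.io/nvidia/pytorch:24.08-py3
    command: ["python", "train.py"]
    resources:
      limits:
        nvidia.com/gpu: 2
```

Routing pods through a queue-aware scheduler is what allows fair sharing and prioritization of scarce GPUs across teams, the core need the project addresses.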
NVIDIA reaffirms its commitment to maintaining and contributing to Kubernetes and CNCF projects. This ongoing involvement helps meet the high demands of enterprise AI customers.
After releasing Dynamo 1.0, NVIDIA is expanding the Dynamo ecosystem with Grove, an open-source Kubernetes API (application programming interface, a set of tools for building software) for managing AI workloads on GPUs. Integrated with the llm-d inference stack, a platform for running large language model inference, it lets developers describe a complex inference pipeline (the process of running AI models to produce results) in a single custom resource.
With these advances, companies can begin using and contributing to the NVIDIA DRA driver today.
To experience these technologies firsthand, visit the NVIDIA booth at KubeCon for live demonstrations.