NVIDIA GTC 2026, scheduled for March 16-19 in San Jose, will shift focus from GPU power to full rack-scale AI systems, spotlighting Blackwell Ultra architectures for agentic AI and throughput inference.
Here are the main points about Blackwell Ultra Low-Power Optical Networking and Telco Reasoning Models for GTC 2026.
Blackwell Ultra and Network Innovations
- The system-focused AI track at GTC 2026 will highlight the shift from counting individual GPUs to deploying rack-scale systems such as NVL72 and NVL144, as well as the new NVL576, which will feature an orthogonal backplane design.
- Blackwell Ultra (GB300) capabilities: major cloud providers use these systems for low-latency, long-running tasks, relying on NVLink Switch for scale-up connectivity and NVFP4 precision for efficient inference.
- LPO and co-packaged optics debut as electrical interconnects inside the rack hit their limits. NVIDIA is investing in optical interconnects, such as CPO and silicon photonics, for AI factories. Lumentum and Coherent are supplying advanced optical components to meet the bandwidth demands of future AI systems.
U.S. Telco Reasoning Models and Agentic AI
- Agentic AI in Telco: NVIDIA is going beyond basic network automation and working on autonomous networks with telco reasoning models.
- Tool-Calling Agents: These models enable AI agents to understand incidents, search databases, and take corrective actions in a controlled, trackable way, replacing old, hand-coded runbooks.
- Industry partnerships: Telecom providers are working with NVIDIA to build 6G on open, secure, AI-based platforms. They are also using NVIDIA NeMo to fine-tune models for network operations center (NOC) workflows.
GTC 2026 Highlights
- Keynote & Focus: CEO Jensen Huang will give the keynote on March 16, discussing new AI initiatives and software-defined infrastructure.
- Key themes: The event will feature Vera Rubin for Agentic AI, Rubin CPX for rapid-throughput inference, and a new AI-native storage system called ICMS.
- Sessions: Many sessions will focus on AI RAN, which brings AI to the edge of telecom networks.
The 2026 GTC event marks a move toward treating inference as a regular operating cost, with attention on metrics such as time to first token, tokens per second, and energy efficiency.
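Treating inference as an operating cost means instrumenting it. The following is a minimal sketch of how the two latency metrics named above, time to first token (TTFT) and tokens per second, can be computed from per-request timestamps; the class and field names are illustrative, not any particular serving framework's API.

```python
from dataclasses import dataclass

@dataclass
class InferenceTrace:
    """Timestamps (in seconds) for one inference request; names are illustrative."""
    request_start: float
    first_token_at: float
    last_token_at: float
    tokens_generated: int

def time_to_first_token(t: InferenceTrace) -> float:
    # TTFT: how long the user waits before any output appears.
    return t.first_token_at - t.request_start

def tokens_per_second(t: InferenceTrace) -> float:
    # Decode throughput: tokens emitted after the first one, per second.
    elapsed = t.last_token_at - t.first_token_at
    return (t.tokens_generated - 1) / elapsed if elapsed > 0 else float("inf")

trace = InferenceTrace(request_start=0.0, first_token_at=0.25,
                       last_token_at=2.25, tokens_generated=101)
print(time_to_first_token(trace))  # 0.25
print(tokens_per_second(trace))    # 50.0
```

In practice these numbers come from the serving stack's telemetry; energy efficiency would be tracked the same way, as tokens per joule at the rack level.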
The telecommunications industry is quickly shifting toward autonomous networks, with 65% of operators viewing AI as essential for automation, according to the latest NVIDIA State of AI in Telecommunications report. Half also rank autonomous networks as the leading AI use case for return on investment.
However, many telecom companies still lack enough AI and data science expertise. This gap makes it hard to safely scale closed-loop automation across complex networks.
Most telecom NOCs use reactive alarm-based workflows. Engineers sift through numerous incidents with various tools, compiling data from different dashboards before resolving issues. NOCs are ideal for autonomous networks because the tasks are repeatable, allowing AI to reduce resolution time and costs.
Tech Mahindra, a global technology and consulting solutions provider, is working with NVIDIA to help close the AI skills gap. Together, they are turning autonomous network building blocks, such as open models, tools, and guides, into resources telecom developers can easily use and adapt in their own networks. This post explains how to fine-tune reasoning models with NVIDIA NeMo so they can work like NOC engineers and safely manage closed-loop self-healing workflows. It covers how to:
- Create synthetic incident data that closely matches real telecom scenarios.
- Translate expert procedures into systematic reasoning traces using production-grade reference workflows. This step teaches the model to coordinate tools, reason about network state, and execute end-to-end fault management tasks during fine-tuning.
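The two steps above can be sketched as a small data-preparation routine: pair a synthetic incident with the runbook steps that resolve it, and emit a chat-format training record. The JSON schema and field names below are illustrative assumptions for supervised fine-tuning, not NeMo's exact data format.

```python
import json

def incident_to_training_record(incident: dict, runbook_steps: list) -> str:
    """Turn a synthetic incident plus its runbook resolution into one
    chat-format SFT record. Schema is illustrative, not NeMo's exact format."""
    reasoning = "\n".join(
        f"Step {i + 1}: {step}" for i, step in enumerate(runbook_steps)
    )
    record = {
        "messages": [
            {"role": "system",
             "content": "You are a NOC engineer. Reason step by step, then act."},
            {"role": "user",
             "content": f"Incident: {incident['summary']} "
                        f"(severity={incident['severity']})"},
            {"role": "assistant", "content": reasoning},
        ]
    }
    return json.dumps(record)

# Hypothetical synthetic incident and its expert runbook resolution.
incident = {"summary": "BGP session flap on edge router PE-7", "severity": "major"}
steps = [
    "Validate the alarm against current interface counters.",
    "Correlate with recent config pushes on PE-7.",
    "Roll back the faulty policy change and confirm session stability.",
]
line = incident_to_training_record(incident, steps)
print(line[:60])
```

Writing one such JSON line per incident yields a JSONL corpus that a fine-tuning pipeline can consume directly.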
This approach gives telco teams a repeatable way to build their own AI agents for network operations. These agents can handle triage, root cause analysis, and resolution for many common incidents, helping operators move closer to TM Forum level 4, highly autonomous networks, and beyond.
Why Do Network Operations Centers Need Reasoning Models?
Traditional NOC automation is mostly rule-based: scripts trigger on preset conditions. These scripts often struggle with noisy signals and cross-domain dependencies. A reasoning model can take on this work in a controlled, auditable way. Instead of hard-coded runbooks and point scripts, the agent uses the model to interpret incidents, decide which tools to call, and adapt its actions based on live responses.
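That interpret-call-adapt loop can be sketched as follows. The "model" here is a deterministic stub standing in for a fine-tuned reasoning model, and each tool would be an API call into existing NOC systems in production; all names are illustrative.

```python
# Minimal sketch of a closed-loop, tool-calling NOC agent (all names illustrative).

def check_interface(incident):
    # Stub NOC tool: validate the alarm against live interface state.
    return {"status": "down", "port": incident["port"]}

def restart_port(incident):
    # Stub NOC tool: corrective action.
    return {"status": "up", "port": incident["port"]}

TOOLS = {"check_interface": check_interface, "restart_port": restart_port}

def stub_model(incident, observations):
    """Stand-in for the reasoning model: picks the next tool (or stops)
    based on what it has observed so far."""
    if not observations:
        return "check_interface"   # first, validate the alarm
    if observations[-1]["status"] == "down":
        return "restart_port"      # fault confirmed: take corrective action
    return None                    # resolved: close the loop

def run_agent(incident, max_steps=5):
    observations, audit_log = [], []
    for _ in range(max_steps):     # bounded loop keeps the agent controllable
        tool = stub_model(incident, observations)
        if tool is None:
            break
        result = TOOLS[tool](incident)
        observations.append(result)
        audit_log.append((tool, result))  # every action is tracked
    return audit_log

log = run_agent({"id": "INC-1042", "port": "eth0/7"})
for tool, result in log:
    print(tool, "->", result)
```

The bounded step count and the audit log are what make the loop "controlled and trackable": the agent can only act through registered tools, and every call is recorded.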
Main features include:
- Tool-calling AI reasoning replaces manual alarm triage by leveraging NOC tools for validation, root cause analysis, and issue resolution across existing systems.
- End-to-End Automation: Manages alarm validation, root cause analysis, and resolution for various incident types, including outages, flaps, congestion, and configuration problems.
- Noise reduction: Filters out self-clearing or low-value alarms using historical patterns, so engineers can focus on higher priorities.
- Resolution in seconds, not hours: Cuts down the time needed to resolve common high-volume incidents from hours to just seconds, greatly lowering MTTR.
The result is a closed-loop self-healing network. NOC agents manage routine triage and resolution, allowing engineers to focus on proactive optimization and complex problem-solving.
Source: Building Telco Reasoning Models for Autonomous Networks with NVIDIA NeMo