Kyndryl (NYSE: KD), a leading provider of enterprise technology services, has launched Agentic Service Management. The new offering combines maturity-model assessments and implementation blueprints to help businesses move from traditional service operations to intelligent, automated workflows. Agentic Service Management also assesses how well organizations comply with emerging industry standards and governance for AI-native environments, making it easier for customers to adopt reliable agentic AI-managed IT services.  

Most current IT systems were not built for agentic AI, creating a gap between AI capabilities and what firms can actually support. The Kyndryl readiness report shows that even though over two-thirds of organizations are investing in AI, almost half have not seen strong results. This is often because their governance workflows and controls are still based on older pre-AI models.  

“Most enterprise environments were built for people managing tickets and tools, not for groups of self-governing agents handling tasks across hybrid and multi-cloud systems. This mismatch is stopping AI from moving beyond pilot projects,” said Kris Lovejoy, global head of strategy at Kyndryl. “You can’t scale agentic workflows on top of models designed for manual work. Organizations need a structured path, with clear controls, responsible practices, and measurable steps for adoption, so AI agents can work independently where it makes sense while people stay responsible for governance, trust, and service results.”  

Kyndryl’s Agentic Service Management is built on decades of experience managing essential infrastructure for thousands of organizations. It leverages Kyndryl’s own intellectual property and adds agentic AI to its service operations, helping organizations move from AI innovation to full readiness for real-world use.  

Creating a Roadmap for Agentic IT Service Management Maturity 

Kyndryl Consult offers the Agentic Service Management Maturity Assessment. It helps organizations understand the current state and gaps in service management, AI governance, security, and operations. This assessment lets customers compare their policies, controls, and workflows against relevant standards such as ISO 42001. After the assessment, Kyndryl provides a customized gap analysis and a step-by-step plan. Customers can then adopt agent-based IT service management responsibly, using safeguards and human monitoring to support autonomous functions in cloud-native and AI-native environments.  
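The assess-then-plan flow described above can be sketched as a simple control-coverage check: compare an organization's current maturity per control against a target baseline and report the shortfalls. The control names and maturity levels below are hypothetical illustrations, not Kyndryl's actual assessment criteria or ISO 42001 controls.

```python
# Hypothetical sketch of a maturity gap analysis. Control names and
# levels are illustrative placeholders, not any vendor's real criteria.

TARGET_BASELINE = {
    "ai_usage_policy": 3,       # documented and enforced
    "human_oversight": 3,       # humans approve high-impact agent actions
    "incident_response": 2,     # AI-specific runbooks exist
    "model_inventory": 2,       # agents and models are catalogued
}

def gap_analysis(current: dict) -> list:
    """Return (control, current_level, target_level) for every shortfall."""
    gaps = []
    for control, target in TARGET_BASELINE.items():
        level = current.get(control, 0)          # missing controls count as level 0
        if level < target:
            gaps.append((control, level, target))
    # largest shortfalls first
    return sorted(gaps, key=lambda g: g[2] - g[1], reverse=True)

current_state = {"ai_usage_policy": 1, "human_oversight": 3, "model_inventory": 0}
for control, have, want in gap_analysis(current_state):
    print(f"{control}: level {have} -> target {want}")
```

The output of such a check is essentially the "customized gap analysis" the article describes: a prioritized list of controls to bring up to the target level before autonomous functions are enabled.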

Kyndryl Agentic AI Digital Trust is also available as a separate service. It supports Agentic Service Management and helps businesses manage and reduce risk while expanding agentic AI deployment across hybrid and multi-cloud environments. This service provides a security-focused framework for managing how AI agents operate, especially in regulated industries where data protection, compliance, and classification are critical.  

Applying Agentic AI to IT Service Delivery 

Kyndryl is transforming service delivery with Agentic Service Management. Many of these services are already available through Kyndryl Bridge, helping customers gain better analysis and support for important systems. Kyndryl’s agentic AI builds on its automation platform, which now runs almost 200 million automated tasks monthly using over 8,000 certified playbooks.  

Take the next step in transforming your IT operations: discover the advantages of Kyndryl Agentic Service Management by visiting our website today.  

About Kyndryl 

Kyndryl (NYSE: KD) ranks among the top providers of essential enterprise technology services. The company advises, implements, and manages services for thousands of customers in over 60 countries. As the world’s largest IT infrastructure services provider, Kyndryl designs, builds, and manages complex information systems that people rely on every day. For more details, visit kyndryl.com. 

Source: Kyndryl launches Agentic Service Management to power AI-native infrastructure services and intelligent workflows 

News Summary 

The NVIDIA Vera Rubin Platform leads the next era of AI with integrated features:  

  • Vera Rubin NVL72 GPU Racks  
  • Vera CPU racks (servers with central processing units for handling calculations)  
  • NVIDIA Groq 3 LPX inference accelerator racks (systems designed to speed up AI model predictions)  
  • NVIDIA Bluefield 4 STX storage racks (high-speed storage and networking hardware)  
  • NVIDIA Spectrum 6 SPX Ethernet racks (advanced switches for fast data networking)  

Following the introduction of the platform’s key features, Nvidia made a significant announcement at GTC: the Nvidia Vera Rubin platform is initiating a new chapter in Agentic AI. Seven new chips are now in full production to help scale the world’s largest AI factories.  

The platform features the Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX9 SuperNIC, BlueField DPU, Spectrum 6 Ethernet switch, and Groq 3 LPU. These work as a unified AI supercomputer, powering all stages of AI, from pre-training to real-time agentic inference.  

“Vera Rubin represents a leap with seven breakthrough chips, five racks, and one supercomputer powering every AI phase,” said Jensen Huang, founder and CEO of Nvidia. “The agentic AI inflection point has arrived, and Vera Rubin is driving historic infrastructure growth.”  

“Enterprises and developers are using cloud for increasingly intricate reasoning, agentic workflows, and mission-critical decisions that demand infrastructure that can keep pace,” said Dario Amodei, CEO and co-founder of Anthropic. “NVIDIA’s platform provides the compute, networking, and system design we need to keep delivering while improving the safety and reliability our customers depend on.”  

“Nvidia infrastructure is the foundation that lets us keep advancing the frontier of AI,” said Sam Altman, CEO of OpenAI. “With Nvidia, Vera Rubin will run more powerful models and agents at a massive scale, delivering faster, more reliable systems to hundreds of millions of people.”

AI infrastructure is changing quickly, moving from separate chips and standalone servers to fully integrated rack-scale systems, POD-scale deployments, AI factories, and sovereign AI. These changes are driving big improvements in performance and cost efficiency for organizations of all sizes and industries, from startups and mid-sized businesses to public and private institutions and enterprises. They also help make AI easier to use and improve energy efficiency for the world’s most challenging workloads.  

By integrating compute, networking, and storage—with support from over 80 Nvidia MGX partners—Vera Rubin offers a unified, extensive POD-scale platform comprising multiple AI racks working together as one system.  

NVIDIA Vera Rubin NVL72 Rack  

The Vera Rubin NVL72 connects 72 Rubin GPUs and 36 Vera CPUs over NVLink 6 for efficient large-model training, requiring only a quarter of the GPUs used by Blackwell and delivering up to 10x higher inference throughput per watt at one-tenth the cost per token. It’s built for hyperscale AI factories, reducing both training time and costs.  

NVIDIA Vera CPU Rack 

Reinforcement learning and agentic AI workloads rely on many CPU-based environments. These environments test and validate the results produced by models running on GPU systems.  

The Nvidia Vera CPU rack offers a dense, liquid-cooled setup based on Nvidia MGX. It includes 256 Vera CPUs, providing scalable, power-efficient capacity with top single-core performance, enabling large-scale agentic AI.  

Integrated with Spectrum X networking, Vera CPU racks keep environments synchronized across the AI factory. Paired with GPU racks, they form the CPU base for large-scale agentic AI and reinforcement learning, delivering results twice as efficiently and 50% faster than traditional CPUs.  
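The division of labor described above, where many CPU cores evaluate rollouts proposed by GPU-trained policies, can be sketched in a few lines. Everything here is a toy illustration of the pattern, not NVIDIA's software stack: the environment, rollouts, and reward rule are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of the CPU-side role in RL for agents: many lightweight
# environments score candidate rollouts proposed by a policy that trains
# on GPU racks. Illustrative only; not NVIDIA's actual stack.

def env_step(rollout: list) -> float:
    """A stand-in environment: reward rollouts that end at the goal state."""
    return 1.0 if rollout and rollout[-1] == "goal" else 0.0

# Candidate trajectories, as if sampled from the policy on the GPUs.
rollouts = [["start", "a", "goal"], ["start", "b"], ["start", "goal"]]

with ThreadPoolExecutor(max_workers=4) as pool:   # one worker per CPU core in practice
    rewards = list(pool.map(env_step, rollouts))

print(rewards)   # feedback returned to the trainer on the GPU racks
```

In a real AI factory each worker would run a full simulator or sandboxed tool environment; the point is that this validation stage is CPU-bound and embarrassingly parallel, which is why dense CPU racks matter for agentic training loops.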

NVIDIA Groq 3 LPX Rack 

The Nvidia Groq 3 LPX is a major step forward in accelerated computing, built for the fast, large-scale needs of AI and agentic systems. LPX and Vera Rubin combine their high performance to deliver up to 35 times more inference throughput per megawatt and up to 10 times more revenue potential for trillion-parameter models.  

When scaled up, many LPUs can work together as a single large processor, speeding up inference tasks. The LPX rack includes 256 LPU processors, 128 GB of on-chip SRAM, and 60 TB/s of bandwidth when used with Vera Rubin NVL72. Rubin GPUs and LPUs work together to process every layer of the AI model and handle each output token.  

The LPX architecture is built for trillion-parameter models and million-token contexts. It works with Vera Rubin to maximize power, memory, and computing resources. Its higher throughput per watt and better token performance open up new possibilities for advanced inference. These also mean more revenue for AI providers; with full liquid cooling and MGX infrastructure, LPX will fit easily into the next generation of Vera Rubin AI factories. It will be available later this year.  

NVIDIA BlueField 4 STX Storage Rack 

The NVIDIA BlueField 4 STX rack-scale system is an AI storage solution. It extends GPU memory across the POD, combining the Vera CPU and ConnectX-9 SuperNIC for high-bandwidth storage and retrieval of key-value cache data.  

NVIDIA DOCA memos, a new DOCA framework, enhances BlueField 4 storage by enabling dedicated KV cache storage processing, boosting inference throughput by up to 5x and making power use far more efficient than with general-purpose storage. The result is faster multi-turn interactions with AI agents, more scalable AI services, and more effective infrastructure utilization across the POD.  
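The idea behind a dedicated KV-cache storage tier can be sketched generically: keep recent conversation state in a small fast tier and spill older entries to a larger, slower store, so a returning agent session fetches its cached state back instead of recomputing it. This is an illustration of the general technique only; the class and methods below are invented and are not NVIDIA's DOCA or BlueField APIs.

```python
from collections import OrderedDict

# Generic sketch of a two-tier KV cache: a small fast tier (GPU/host
# memory) backed by a large slow tier (a storage rack). Illustrative
# only; not NVIDIA's DOCA or BlueField API.

class TieredKVCache:
    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()   # session_id -> cached attention state (LRU order)
        self.slow = {}              # stand-in for the storage tier
        self.fast_capacity = fast_capacity

    def put(self, session_id: str, kv_state: bytes) -> None:
        self.fast[session_id] = kv_state
        self.fast.move_to_end(session_id)
        while len(self.fast) > self.fast_capacity:
            evicted_id, evicted_state = self.fast.popitem(last=False)
            self.slow[evicted_id] = evicted_state   # spill LRU entry to storage

    def get(self, session_id: str):
        if session_id in self.fast:
            self.fast.move_to_end(session_id)
            return self.fast[session_id]
        if session_id in self.slow:                 # fetch back instead of recomputing
            state = self.slow.pop(session_id)
            self.put(session_id, state)
            return state
        return None

cache = TieredKVCache(fast_capacity=2)
for sid in ("a", "b", "c"):
    cache.put(sid, f"kv-{sid}".encode())
print(sorted(cache.slow))   # session "a" was evicted to the storage tier
```

The throughput claim in the article comes from exactly this trade: re-reading a spilled cache entry over a fast network is far cheaper than re-running prefill over the whole conversation history.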

“The Nvidia BlueField 4 STX rack-scale context memory storage system will enable a critical performance boost needed to exponentially scale our agentic AI efforts,” said Timothee Lacroix, co-founder and chief technology officer of Mistral AI. “By delivering a new storage tier purpose-built for AI agents’ memory, STX is ideally positioned to ensure our models can maintain coherence and speed when reasoning across large datasets.”  

NVIDIA Spectrum 6 SPX Ethernet Rack  

Spectrum 6 SPX Ethernet accelerates data movement within the AI factory using Spectrum X Ethernet, while NVIDIA Quantum X800 InfiniBand switches ensure high-speed rack-to-rack connections at scale.  

Spectrum-X Ethernet Photonics uses co-packaged optics, offering up to five times better optical power efficiency and ten times greater resiliency than traditional pluggable transceivers.  

Improved Resiliency And Energy Efficiency 

NVIDIA, along with more than 200 data center partners, has announced the NVIDIA DSX platform for Vera Rubin. DSX Max Q dynamically manages power across the entire AI factory, enabling data centers to deploy 30% more AI infrastructure without increasing power consumption. The new DSX Flex software also helps AI factories use grid power more flexibly, unlocking 100 gigawatts of unused grid power. NVIDIA also released the Vera Rubin DSX AI factory reference design, a blueprint for co-designed AI infrastructure that maximizes tokens per watt and overall goodput, improving system resiliency and accelerating time-to-first-production.  
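The "30% more infrastructure in the same power envelope" idea rests on oversubscription: racks rarely all peak at once, so a controller can admit more hardware than the nominal budget allows and throttle draw only when aggregate demand would exceed it. The sketch below illustrates that logic with made-up numbers and a proportional cap; it is not the DSX Max Q implementation.

```python
# Illustrative sketch of dynamic power management across racks:
# oversubscribe the facility power envelope, then throttle racks when
# aggregate demand would exceed it. Numbers and policy are hypothetical,
# not NVIDIA DSX Max Q.

FACILITY_BUDGET_KW = 1000.0

def apply_power_caps(demands_kw: dict) -> dict:
    """Scale every rack's draw down proportionally if total demand exceeds budget."""
    total = sum(demands_kw.values())
    if total <= FACILITY_BUDGET_KW:
        return dict(demands_kw)            # no throttling needed
    scale = FACILITY_BUDGET_KW / total
    return {rack: kw * scale for rack, kw in demands_kw.items()}

# 13 racks nominally rated 100 kW each (30% oversubscribed vs. the
# 1000 kW budget), but in practice they rarely all peak simultaneously.
demands = {f"rack{i}": 100.0 for i in range(13)}
capped = apply_power_caps(demands)
print(round(sum(capped.values()), 1))      # total draw held to the envelope
```

A production controller would use per-rack priorities and hardware power-capping interfaces rather than a uniform scale factor, but the admission math is the same.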

By integrating compute, networking, storage, power, and cooling, Vera Rubin’s architecture boosts energy efficiency, scales reliably under heavy workloads, and maintains high uptime for AI factories.  

Broad Ecosystem Support 

Partners will start offering Vera Rubin–based products in the second half of this year. These products will be available through major cloud providers like Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure, as well as NVIDIA cloud partners such as CoreWeave, Crusoe, Lambda, Nebius, NScale, and Together AI.  

Within this broad ecosystem, global system manufacturers such as Cisco, Dell Technologies, HPE, Lenovo, and Supermicro are expected to offer a variety of servers built with Vera Rubin products. In addition, other companies such as Aivres, Asus, Foxconn, Gigabyte, Inventec, Pegatron, Quanta Cloud Technology (QCT), Wistron, and Wiwynn will also provide these servers.  

On this foundation, leading AI labs and developers, including Anthropic, Meta, Mistral AI, and OpenAI, are embracing Vera Rubin to train larger models and accelerate long-context multimodal systems, aiming for greater speed and efficiency. 

Source: NVIDIA Vera Rubin Opens Agentic AI Frontier 

Samsung Electronics has launched the stable version of its browser for Windows, expanding its ecosystem beyond mobile devices. Released on March 26, 2026, this update turns Samsung Browser into a cross-platform tool. Automation now keeps user activity in sync across phones, tablets, and PCs, ensuring continuity as users move between devices. With built-in digital assistance, Samsung goes beyond basic data syncing, aiming to keep users’ work, shopping, or planning organized and accessible across the entire Galaxy ecosystem.  

The Evolution of Session Continuity 

The main feature of this update is a new continuity engine for maintaining browser sessions across devices. Instead of syncing only bookmarks and history, Samsung Browser for Windows preserves session details, including open tabs, scroll position, and page interactions. This means you can start browsing on your phone and seamlessly continue on your desktop exactly where you left off. The Samsung Continuity Service enables this session transfer on signed-in devices using encrypted local syncing, keeping user activities consistent in real time.  
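The continuity engine described above amounts to snapshotting session state per device and letting the freshest snapshot win when another device resumes. The sketch below shows that shape with a last-writer-wins merge; the function names and snapshot format are invented for illustration and are not Samsung's Continuity Service protocol.

```python
import time

# Hedged sketch of cross-device session sync: serialize open tabs with
# scroll positions per device, and resume from the newest snapshot.
# Generic illustration only; not Samsung's actual sync protocol.

def snapshot(device: str, tabs: list) -> dict:
    """Capture one device's session state with a timestamp."""
    return {"device": device, "ts": time.time(), "tabs": tabs}

def merge(snapshots: list) -> dict:
    """Last-writer-wins: resume from the most recent snapshot."""
    return max(snapshots, key=lambda s: s["ts"])

phone = snapshot("phone", [{"url": "https://example.com/specs", "scroll": 0.62}])
time.sleep(0.01)   # the desktop session is touched later
desktop = snapshot("desktop", [{"url": "https://example.com/reviews", "scroll": 0.10}])

latest = merge([phone, desktop])
print(latest["device"])   # the device whose state the user resumes from
```

Real implementations also need per-tab merging and conflict handling when two devices are edited concurrently, which is why the article notes the sync is encrypted and continuous rather than a one-shot transfer.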

For professionals working through complex research or travelers coordinating a multi-stop itinerary, this eliminates the friction of reconstructing one’s place in a digital task. For example, if someone is comparing products on their phone while commuting, the Windows browser will show a continue-from-mobile prompt when they return to their task. This smooth transition makes the browsing experience feel continuous, so users can resume right where they left off throughout the day. The built-in Ask AI assistant is designed to understand the specific context of the open tabs and can integrate information from multiple sources simultaneously, enabling multi-tab context awareness. For example, a user planning a business trip can ask the browser to compare hotel reviews and conference locations across different open tabs and generate a consolidated summary without ever leaving the primary window.  

With these new features, the browser can now take on complex tasks that previously required human intervention. For example, users can ask it to find the exact moment in a video when a speaker discusses thermal architecture, or to create a four-day travel plan using last week’s browsed websites. The browser’s natural language processing searches through browsing history, so users no longer need to recall specific keywords or dates. Acting as a smart assistant, it finds and organizes information based on user needs, seamlessly continuing the support introduced earlier.  

Security and Identity Through Samsung Pass 

As more of our digital life happens across different devices, keeping our identities safe is critical. Samsung handles this by adding Samsung Pass to the Windows browser. This lets users access their passwords, addresses, and payment details on their PCs with the same security as on their Galaxy phones. Knox Enhanced Encrypted Protection (KEEP) creates a safe barrier between the browser and the computer system, keeping private data separate and protected at a fundamental level. 

The unified identity layer is key to making automated features work smoothly. For example, when the browser fills out a profile for a new service or retrieves a saved loyalty card, it needs a secure, reliable link to the user’s vault, which is managed through the Samsung account system. Samsung offers a zero-trust approach, enabling users to use their devices with ease without compromising security. People can switch between devices while knowing their personal information is both synced and protected by dedicated hardware.  

Optimized Performance for Desktop Ecosystems 

Bringing the browser to Windows means it now has a look designed for larger screens and desktop use. While the phone version is known for being easy to use, the desktop version uses a simple layout that makes handling tabs and add-ons easy. The new thumbnail tab view lets users see big pictures of their open tabs on both PCs and phones, showing a clear view of their activity. 

The personal data engine (PDE) in the latest Galaxy Book and S26 series enhances performance by handling more automated tasks directly on the device. This minimizes the delays commonly experienced with cloud-based assistance by processing data and interpreting language locally, keeping the browser fast even during multitasking. This emphasis on speed ensures users experience automation in real time, allowing them to continue working or creating without waiting for pages to load.  

The Silent Architecture of Tomorrow 

As these smart features expand across devices, the tools we use will collaborate seamlessly. The browser is becoming more than just a window for online activity; it is a partner, remembering details and making interactions more efficient. Eventually, actions like searching and completing tasks will merge, with the browser understanding both past behavior and user goals. Future syncing will be seamless, making digital experiences smooth and connected, guided by technology that anticipates and assists user needs.

Source: Samsung Takes Its Browser Beyond Mobile, Extending Agentic AI Across Devices 


Samsung Electronics released Samsung Browser for Windows, bringing its popular mobile browser to PC. The new version offers cross-device browsing and advanced AI features for smoother navigation.  

Browse Seamlessly From Mobile to PC 

Samsung Browser for Windows enables seamless transitions between devices, allowing uninterrupted browsing from mobile to PC beyond just bookmarks and history synchronization. It lets users instantly resume where they left off.  

For example, users can switch devices and keep viewing the same page. This ensures a seamless browsing experience.  

Samsung Pass integration enables users to securely store personal data and quickly sign in to websites using autofill profiles.  

A New Way To Experience The Web With Agentic AI 

Samsung is introducing an AI-enhanced assistant into Samsung Browser. The browser interprets natural language, comprehends page context, and tracks activity across tabs, making content exploration and task execution easier. The AI does more than respond to queries; it manages tabs, tracks history, and boosts productivity within the browser.  

  • Samsung Browser interprets web page context to deliver tailored solutions. For example, if you are organizing a trip to Seoul, you can instruct the browser to generate a 4-day itinerary based on the current page. The browser will analyze the content and create a customizable plan in your preferred format.  
  • A faster, smarter way to search: Samsung Browser uses advanced natural language understanding for quick, efficient browsing. Find information right away, with no more sorting through endless web pages. The browser applies the same smart tech to videos: it understands video context and can jump straight to your desired point.  
  • Retrieve the right page from history using natural language rather than keywords or dates. For example, find the smartwatch you viewed last week by describing it.  
  • Multi-tab context awareness means you don’t need to click through tabs to compare information. Samsung Browser summarizes and compares content from different tabs, bringing key insights together.  
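The describe-it-instead-of-remembering-keywords retrieval in the list above can be illustrated with a toy scorer that ranks history entries by word overlap with the user's description. A real assistant would use semantic embeddings rather than exact word matching; the function and data below are invented for illustration and are not Samsung's implementation.

```python
# Toy illustration of finding a history entry from a natural-language
# description: rank entries by word overlap with the description.
# A real assistant would use semantic embeddings; this is not
# Samsung Browser's actual retrieval system.

def find_in_history(description: str, history: list) -> str:
    """Return the URL of the history entry best matching the description."""
    words = set(description.lower().split())
    def score(entry: dict) -> int:
        return len(words & set(entry["summary"].lower().split()))
    return max(history, key=score)["url"]

history = [
    {"url": "https://shop.example/watch-x",
     "summary": "smartwatch with titanium band and sleep tracking"},
    {"url": "https://news.example/markets",
     "summary": "stock market closes higher on tech earnings"},
]

print(find_in_history("that smartwatch I viewed with the sleep tracking feature", history))
```

Embedding-based retrieval generalizes this idea: descriptions and page summaries land near each other in vector space even when they share no exact words, which is what lets users find "the smartwatch from last week" without recalling its name.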

Availability 

Samsung Browser for Windows runs on Windows 11 and Windows 10 version 1809 or later. Agentic AI features work in South Korea and the United States. More markets will be added soon. Learn more at browser.samsung.com.

Source: Samsung Takes Its Browser Beyond Mobile, Extending Agentic AI Across Devices 

News Summary 

  • NVIDIA Nemotron 3 models enable AI agents to engage in natural conversation, perform intricate reasoning, and leverage advanced visual features.  
  • NVIDIA Isaac GR00T N1.7, Alpamayo 1.5, and Cosmos 3 improve physical AI reasoning and actions for robots and self-driving vehicles. (VLA stands for vision-language-action: models that link visual input, language, and actions in AI systems.) 
  • The Proteina-Complexa model in NVIDIA BioNeMo speeds protein drug discovery. It includes a new dataset of millions of AI-predicted protein complexes, created in collaboration with Google DeepMind, EMBL, and Seoul National University.  
  • Companies like CodeRabbit, CrowdStrike, Cursor, Factory, ServiceNow, and Perplexity use NVIDIA open models for agentic AI. LG Electronics and Milestone Systems use them for physical AI. Novo Nordisk, Viva Biotech, and Manifold Bio use them for healthcare AI.  

At GTC, NVIDIA announced the expansion of its model families, positioning these new models as foundational to the next generation of agentic physical and healthcare AI. This initiative is designed to help developers and scientists build intelligent systems that reason and act effectively in digital and real-world scenarios.  

Open models are essential drivers of global innovation. NVIDIA’s expanding suite includes Nemotron (agentic systems), Cosmos (physical AI), Alpamayo (self-driving vehicles), Isaac GR00T (robotics), and BioNeMo (biomedical research), all of which are central to unlocking new abilities across industries.  

“Open source AI drives global innovation,” said Kari Briski, vice president of generative AI software at NVIDIA. “NVIDIA’s open model families broaden intelligence across biology, robotics, and autonomous machines, empowering developers to build smart agents that advance both digital and physical industries.”  

NVIDIA Nemotron 3 Ultra, Omni, and VoiceChat Models Power AI Agents 

The NVIDIA Nemotron family now includes models for language, vision, voice, and safety, helping developers build specialized agentic AI.  

NVIDIA Nemotron 3 multimodal models support natural conversation, intricate reasoning, and advanced skills for AI agents.  

  • Nemotron 3 Ultra offers advanced intelligence to boost productivity. The NVFP4 format (a data format developed by NVIDIA for efficient processing and storage) on the NVIDIA Blackwell platform delivers 5 times greater efficiency. Key benefits include support for AI-native applications such as coding assistance, faster research, and complex workflow automation.  
  • Nemotron 3 Omni integrates audio, vision, and language, enabling AI agents to efficiently extract actionable insights from videos and documents, aiding decision-making and saving time.  
  • Nemotron 3 VoiceChat enables real-time conversations in which the AI can listen and respond simultaneously. It brings together speech recognition, language processing, and text-to-speech in a single system.  
  • Nemotron safety models and retrieval tools enhance trust in multimodal systems. They detect unsafe content in text and images, and agentic retrieval improves the accuracy of results.  
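The efficiency gains attributed to compact formats like NVFP4 come from the general idea of low-bit block quantization: store one shared scale per block of weights plus small integers, cutting memory and bandwidth several-fold. The sketch below shows a generic signed 4-bit scheme to illustrate the concept; it is deliberately simplified and is not the NVFP4 specification.

```python
# Generic sketch of 4-bit block quantization, the idea behind compact
# inference formats: one float scale per block plus 4-bit integers.
# Concept illustration only; this is NOT the NVFP4 specification.

def quantize_block(values: list) -> tuple:
    """Map a block of floats into the signed 4-bit range [-7, 7] with one scale."""
    scale = max(abs(v) for v in values) / 7 or 1.0   # avoid a zero scale
    q = [max(-7, min(7, round(v / scale))) for v in values]
    return scale, q

def dequantize_block(scale: float, q: list):
    """Reconstruct approximate floats from the scale and 4-bit integers."""
    return [scale * x for x in q]

scale, q = quantize_block([0.11, -0.42, 0.07, 0.35])
restored = dequantize_block(scale, q)
print([round(v, 2) for v in restored])   # close to the original values
```

Each 4-bit value replaces a 16- or 32-bit float, so a block of weights shrinks roughly 4-8x at the cost of small rounding error, which is the trade behind the "5 times greater efficiency" style of claim.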

LangChain now integrates NVIDIA Nemotron models and other NVIDIA Agent Toolkit software in its agent development platform. This helps businesses build, deploy, and monitor smart AI assistants that can automate complex tasks at scale.  

Companies such as Automation Anywhere, CodeRabbit, CrowdStrike, Cursor, Factory, Distil, GenSpark, Perplexity, and ServiceNow use NVIDIA Nemotron models for advanced agentic applications. Edison Scientific applies NVIDIA Nemotron in Kosmos, an autonomous AI scientist that supports over 50,000 researchers in completing hundreds of research tasks simultaneously, reducing months of work to a single day.  

AI developers use Nemotron to create sovereign models for billions of people across languages and cultures. Organizations include AI Singapore, Bielik.AI, Indosat Ooredoo Hutchison, Linagora, Soofi, Stockmark, Trillian Labs, Viettel, and YTL AI Labs.  

NVIDIA released Nemotron Personas: privacy-focused synthetic datasets based on census and demographic data. The France dataset, generated with Pleias, is now available, along with datasets for the US, Japan, India, Brazil, and Singapore.

NVIDIA is also accelerating physical AI and self-driving development with new models and simulation tools, helping robots and vehicles reason and act in the real world.  

  • NVIDIA Cosmos 3, the first world foundation model uniting world generation, physical AI reasoning, and action simulation, arrives soon to help deploy physical AI in complex settings.  
  • NVIDIA ISAC GR00TN1.7 is an open source reasoning vision-language-action (VLA) model, where VLA stands for Visual and Language Action, designed for humanoids. It is now ready for actual use.  
  • NVIDIA Alpamayo 1.5 is a VLA model that boosts autonomous vehicle reasoning, offering navigation guidance, prompt conditioning, flexible multi-camera support, and adjustable camera settings.  

At GTC, NVIDIA CEO Jensen Huang previewed GR00T-N2, a next-generation robot foundation model based on Dream Zero research. GR00T-N2 completes new tasks in new environments over twice as often as top VLA models. It leads in Malmo Spaces and RoboArena for generalist robot policies and is expected by year’s end.  

Companies like HCLTech, Johnson & Johnson MedTech, Milestone Systems, Mimic Robotics, Skild AI, Tulip, and Toyota Research Institute use NVIDIA Cosmos to accelerate physical AI training and video analytics. Humanoid, LG Electronics, and Neura use NVIDIA Isaac GR00T N1.7 for deploying humanoid robots.  

Open Models Accelerate Healthcare and Life Sciences Research 

NVIDIA is advancing AI-driven discovery in healthcare and life sciences with open multimodal models and datasets. These tools speed up biomedical research, drug discovery, medical imaging, and the understanding of scientific literature, helping researchers develop new knowledge and model, design, and simulate biological systems at scale.  

Proteina-Complexa is a generative model for designing protein binders, speeding up structure-based drug discovery and therapy development. Novo Nordisk, Viva Biotic, and Manifold Bio use it to design and test proteins that bind to target proteins.  

NVIDIA worked with EMBL, Google DeepMind, and Seoul National University to expand the AlphaFold protein structure database, adding about 30 million protein-complex predictions, including 1.7 million high-confidence entries. This accelerates drug-target discovery and understanding of disease biology.  

NVIDIA also launched NVQSP, a GPU-accelerated simulation engine that allows pharmaceutical researchers to test many more treatment scenarios in computer models before clinical trials. In tests, it was up to 77 times faster than traditional single-threaded CPU simulations, letting scientists analyze hundreds of treatment levels and patient groups in the time it used to take to simulate just a few.  
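To make the parallel-scenario idea concrete, here is a toy vectorized sketch. It is not NVQSP itself; the one-compartment decay model and every number in it are invented for illustration. The point is that many dose levels and patient rates are evaluated in a single pass instead of one scenario at a time:

```python
import numpy as np

# Toy one-compartment drug model C(t) = dose * exp(-k * t), swept over
# many dose levels and elimination rates at once with vectorized math.
# All values are illustrative, not from any real simulation engine.
doses = np.linspace(10, 500, 200)          # 200 candidate dose levels
rates = np.linspace(0.05, 0.5, 100)        # 100 patient elimination rates
t = np.linspace(0, 24, 48)                 # 24-hour horizon, 48 samples

# Broadcast to a (dose, rate, time) grid: every scenario in one pass.
conc = doses[:, None, None] * np.exp(-rates[None, :, None] * t[None, None, :])

# Peak concentration per scenario, for all 20,000 scenarios together.
peaks = conc.max(axis=2)
print(peaks.shape)  # (200, 100)
```

A single-threaded CPU approach would loop over the 20,000 (dose, rate) pairs one by one; a vectorized or GPU-accelerated engine evaluates the whole grid at once, which is where speedups of the magnitude reported above come from.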

Availability 

Some NVIDIA open models, data, and frameworks are available on GitHub, Hugging Face, various cloud and AI platforms, and build.nvidia.com.  

Many models are also offered as NVIDIA NIM microservices. These enable secure, scalable deployment across any NVIDIA-accelerated infrastructure, from edge devices to the cloud.
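As a rough illustration of what calling such a microservice looks like, the sketch below assembles an OpenAI-style chat-completions payload. The endpoint URL and model name are placeholder assumptions for the example, not official values:

```python
import json

# Minimal sketch of calling a NIM-style microservice, which exposes an
# OpenAI-compatible chat-completions endpoint. The URL and model name
# below are illustrative placeholders, not official values.
NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

def build_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
        "max_tokens": 256,
    }

payload = build_request("nvidia/nemotron-example", "Summarize this incident log.")

# In a real deployment you would POST the payload, e.g. with requests:
#   resp = requests.post(NIM_URL, json=payload,
#                        headers={"Authorization": "Bearer <key>"})
print(json.dumps(payload)[:60])
```

Because the interface follows the widely used chat-completions shape, the same client code can target an edge box or a cloud deployment by changing only the URL.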

Source: NVIDIA Expands Open Model Families to Power the Next Wave of Agentic, Physical and Healthcare AI 

OpenAI is moving from conversational AI to agentic AI with Operator, an agent powered by its Computer-Using Agent (CUA) model. Operator can independently control a computer to complete multi-step tasks, enabling AI to interact with websites and apps on the user’s behalf.  

With this new paradigm in mind, here is an outline of OpenAI’s vision for autonomous computer use, starting with what Operator is:  

  • Operator is an AI agent designed to take control of a user’s web browser, and eventually their computer, to handle repetitive or complex tasks.  
  • Operator runs on the Computer-Using Agent (CUA) model, which combines GPT-4o’s visual reasoning with reinforcement learning. Unlike older automation tools that require API interfaces, CUA can view the screen via screenshots and interact with graphical interfaces as a person does.  
  • Operator can fill out forms, order groceries, do research, create memes, and schedule appointments.  
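The screenshot-driven interaction described above can be sketched as a simple observe-act loop. The `Action` type and the scripted `plan_action` stub below are illustrative stand-ins, not OpenAI's actual API:

```python
from dataclasses import dataclass

# Hedged sketch of the observe-act loop behind a computer-using agent:
# the model sees a screenshot, proposes a GUI action (click/type/done),
# the harness executes it, and the loop repeats.

@dataclass
class Action:
    kind: str            # "click", "type", or "done"
    target: str = ""     # element description or coordinates
    text: str = ""       # text to type, if any

def plan_action(screenshot: bytes, goal: str, step: int) -> Action:
    """Stand-in for the model call that maps pixels + goal to an action."""
    script = [Action("click", "search box"),
              Action("type", "search box", goal),
              Action("done")]
    return script[min(step, len(script) - 1)]

def run_agent(goal: str, max_steps: int = 10) -> list[Action]:
    history = []
    for step in range(max_steps):
        screenshot = b"..."               # capture the screen here
        action = plan_action(screenshot, goal, step)
        history.append(action)
        if action.kind == "done":
            break
        # execute_action(action) would click or type via an automation layer
    return history

trace = run_agent("book a table for two")
print([a.kind for a in trace])  # ['click', 'type', 'done']
```

The key difference from API-based automation is visible in the loop: the model's only inputs are pixels and the goal, so it can drive any interface a person could, at the cost of needing a step budget and a way to hand control back when it gets stuck.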

The Shift to Agentic AI 

  • Operator marks a shift from chatbots that only talk to agents that can take action. It is built to manage long, multi-step tasks with little need for people to step in.  
  • A new ChatGPT Agent feature lets AI use a virtual computer to check calendars, book restaurants, and make slide decks.  
  • Once a goal is set, the agent works on its own; for example, it could plan a weekend trip.  

Present Constraints & Safety 

  • Operator is still in the research stage and is primarily available to Pro users in the US.  
  • The AI pauses for human approval before any action that cannot easily be undone, like sending emails or deleting calendar events.  

The agent can sometimes get stuck on tricky interfaces, CAPTCHAs, or password fields, so it may need help from a person.  

Future Outlook 

  • OpenAI plans to expand Operator to Plus, Team, and Enterprise users.  
  • OpenAI positions computer-using agents as a foundation for progress toward artificial general intelligence (AGI).  
  • The aim is to move from a single tool to an ecosystem in which agents work independently across multiple systems.  

The move to agentic AI is part of a broader trend in 2025, with companies like Anthropic and Google building similar capabilities.  

At the start of this year, OpenAI CEO Sam Altman predicted 2025 would be pivotal for AI agents—tools that automate tasks and act on users’ behalf.  

Building on this vision, OpenAI is now making its first real move in this area.  

OpenAI has announced a research preview of Operator, an AI agent that controls a web browser and autonomously performs tasks. It will initially be available to US users with ChatGPT’s Pro subscription and will expand to Plus, Team, and Enterprise plans, with dates to be announced.  

Operator will be available in other countries soon, though a specific launch date has not been announced. OpenAI CEO Sam Altman said during a live stream on Thursday that Europe will, unfortunately, take a while.  

Currently, the research preview is at operator.chatgpt.com. OpenAI plans to add Operator to all ChatGPT clients soon. Operator promises to automate tasks such as booking travel, making reservations, and shopping. The interface offers categories such as shopping, delivery, dining, and travel for different automations.  

When users activate Operator in ChatGPT, a dedicated web browser opens, allowing the agent to complete tasks and explain its actions. Users still control their own screen, as Operator operates in its own browser.   

OpenAI explains that Operator runs on the Computer-Using Agent (CUA) model, combining GPT-4o’s vision capabilities with advanced reasoning. The CUA interacts directly with website interfaces rather than going through developer APIs.  

This allows the CUA to click, navigate menus, and fill forms on web pages much like a person.  

OpenAI says it’s collaborating with companies like DoorDash, eBay, Instacart, Priceline, StubHub, and Uber to ensure Operator complies with their terms of service.  

The CUA model is trained to ask for user confirmation before finalizing tasks with external side effects, for example, before submitting an order or sending an email, so that the user can recheck the model’s work before it becomes permanent. OpenAI says this approach has already proven useful in a variety of cases, and it aims to extend that dependability across a wider range of tasks.  
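A minimal sketch of such a confirmation gate might look like the following. The action names and the `approve` callback are made up for the example:

```python
# Sketch of a human-in-the-loop gate: actions with external side effects
# pause for explicit approval before running. The action names and the
# approve callback are illustrative assumptions, not OpenAI's design.

SIDE_EFFECTING = {"submit_order", "send_email", "delete_event"}

def execute(action: str, approve=lambda a: False) -> str:
    """Run an action, pausing when it has external side effects."""
    if action in SIDE_EFFECTING and not approve(action):
        return f"paused: {action} awaits user confirmation"
    return f"executed: {action}"

print(execute("scroll_page"))                          # safe, runs immediately
print(execute("send_email"))                           # paused by default
print(execute("send_email", approve=lambda a: True))   # runs after approval
```

The design choice is that the default is conservative: anything on the side-effecting list stops unless a human explicitly opts in, so a model mistake stalls harmlessly instead of becoming permanent.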

But OpenAI warns the CUA isn’t perfect. The company says it doesn’t expect the CUA to perform reliably in all scenarios just yet.  

Currently, Operator cannot consistently handle many complex or specialized tasks. OpenAI cites examples such as creating detailed slide shows, overseeing intricate calendar systems, and interacting with highly customized or non-standard web interfaces.  

To be extra careful, OpenAI requires users to supervise certain tasks, such as banking transactions, even though the CUA and Operator could handle them on their own. For example, users must enter credit card information themselves. OpenAI also says Operator does not collect or screenshot any data.  

On particularly sensitive websites, such as email, Operator requires active user supervision, ensuring users can directly catch and handle any potential mistakes the model might make, OpenAI says in its support materials.  

This does limit what Operator can do, but it also helps prevent mistakes like the agent accidentally spending your mortgage payment on edgy accent chairs. Google has taken a similar approach with its Project Mariner AI agent, which also avoids entering sensitive information such as credit card numbers.  

Limitations 

Operator does have some important limitations.  

There are both daily and task-based rate limits. Operator can handle several tasks at once, but dynamic limits cap how many, and there is also a total-usage limit that resets each day.  
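The combination of a concurrency cap and a daily quota can be sketched as follows; the specific numbers are invented for the example:

```python
import datetime

# Illustrative sketch of the two limits described: a cap on concurrent
# tasks plus a total daily-usage quota that resets each day.
# The numbers are made up for the example, not OpenAI's actual limits.

class OperatorLimits:
    def __init__(self, max_concurrent=5, daily_quota=40):
        self.max_concurrent = max_concurrent
        self.daily_quota = daily_quota
        self.active = 0          # tasks running right now
        self.used_today = 0      # tasks started since the day began
        self.day = datetime.date.today()

    def try_start(self) -> bool:
        today = datetime.date.today()
        if today != self.day:    # quota resets at the day boundary
            self.day, self.used_today = today, 0
        if self.active >= self.max_concurrent or self.used_today >= self.daily_quota:
            return False
        self.active += 1
        self.used_today += 1
        return True

    def finish(self):
        self.active = max(0, self.active - 1)

limits = OperatorLimits(max_concurrent=2, daily_quota=3)
print([limits.try_start() for _ in range(3)])  # [True, True, False]: concurrency cap hit
```

Finishing a task frees a concurrency slot, but the daily counter keeps climbing, so a user can run out of quota even with no tasks active.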

For security reasons, Operator will not perform certain tasks at this stage, such as sending emails or deleting calendar events, even though the CUA is technically capable. OpenAI says this may change in the future, but there is no timeline yet.  

Operator can also get stuck if it encounters a complex interface, a password field, or a CAPTCHA. When this happens, it will prompt the user to take over.  

An Agentic Future 

Compared with competitors like Rabbit, Google, and Anthropic, OpenAI has taken longer to develop an AI agent. This may be due to the technology’s safety risks.  

When an AI system can take actions on the web, it opens the door to much more dangerous use cases from nefarious actors. Bad actors could use AI agents to orchestrate phishing scams or DDoS attacks, or have them snatch up concert tickets before anyone else can. Especially for a tool as widely used as ChatGPT, it’s important that OpenAI takes steps to prepare for such exploits.  

OpenAI believes Operator is safe enough to release now, at least as a research preview.  

Operator employs tools that seek to limit the model’s susceptibility to malicious prompts, hidden instructions, and prompt injection. OpenAI explains on its website that a monitoring system pauses activity if something suspicious is detected, while automated and human-reviewed pipelines continuously update its safeguards.  

Operator is OpenAI’s most ambitious effort so far to create an AI agent. Recently, OpenAI launched Tasks, which gave ChatGPT basic automation features such as creating reminders and scheduling prompts to run at specific times each day. Tasks added some familiar but important features to ChatGPT, making it about as practical as Siri or Alexa. Operator, however, introduces capabilities that earlier virtual assistants could not offer. Many see agents as the next technology after ChatGPT, one that will change how people use the internet and their PCs: instead of simply delivering and processing information, agents can, in theory, take actions and actually do things.  

Now that OpenAI has released its first real AI agent, we will soon see how realistic this vision actually is.

Source: OpenAI launches Operator, an AI agent that performs tasks autonomously