Jensen Huang Announces That Nvidia’s Latest Vera Rubin Chips Have Entered ‘Full Production’

Nvidia CEO Jensen Huang announced that the company’s upcoming AI superchip platform, Vera Rubin, is set to start shipping to customers later this year. “Today, I can tell you that Vera Rubin is in full production,” Huang stated during a press event at the CES technology trade show in Las Vegas on Monday.

Rubin aims to cut the cost of running AI models to roughly one-tenth of what it is on Blackwell, Nvidia’s current flagship chip system, according to information shared with analysts and journalists on Sunday. The company also said Rubin can train certain large models with about one-fourth the number of chips Blackwell requires. Together, those gains could sharply lower the cost of operating advanced AI systems, making it even harder for Nvidia’s customers to consider alternatives to its hardware.
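As a rough illustration of what those ratios imply, here is a back-of-envelope sketch in Python. The baseline figures are hypothetical placeholders chosen for the example, not Nvidia’s numbers; only the one-tenth and one-fourth ratios come from the announcement.

```python
# Back-of-envelope sketch of Nvidia's claimed Rubin-vs-Blackwell ratios.
# All baseline numbers below are hypothetical placeholders, not Nvidia figures.

blackwell_cost_per_m_tokens = 1.00     # hypothetical inference cost ($ per million tokens)
blackwell_gpus_for_training = 100_000  # hypothetical GPU count for a large training run

# Ratios cited in the announcement:
inference_cost_ratio = 1 / 10  # Rubin runs models at ~1/10 the operational cost
training_chip_ratio = 1 / 4    # Rubin trains certain large models with ~1/4 the chips

rubin_cost_per_m_tokens = blackwell_cost_per_m_tokens * inference_cost_ratio
rubin_gpus_for_training = blackwell_gpus_for_training * training_chip_ratio

print(f"Inference: ${blackwell_cost_per_m_tokens:.2f} -> ${rubin_cost_per_m_tokens:.2f} per M tokens")
print(f"Training:  {blackwell_gpus_for_training:,} -> {int(rubin_gpus_for_training):,} GPUs")
```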

During the call, Nvidia revealed that two of its current partners, Microsoft and CoreWeave, will be among the first to offer services powered by Rubin chips later this year. Microsoft’s two major AI data centers under construction in Georgia and Wisconsin will eventually incorporate thousands of Rubin chips, Nvidia said. Some of Nvidia’s partners have already begun running next-generation AI models on early Rubin systems, the company reported.

The semiconductor leader also said it is working with Red Hat, which makes open-source enterprise software for customers such as banks, automakers, airlines, and government agencies, to offer more products compatible with the new Rubin chip system.

Nvidia’s latest chip platform is named after Vera Rubin, the prominent American astronomer known for advancing our understanding of galaxies. The system comprises six distinct chips, including the Rubin GPU and a Vera CPU, both built on Taiwan Semiconductor Manufacturing Company’s 3-nanometer process and paired with the latest high-bandwidth memory available. Nvidia’s sixth-generation interconnect and switching technologies link the chips together.

Huang described each component of this chip system as “completely revolutionary and the best of its kind” during the company’s CES press briefing.

Nvidia has been working on the Rubin system for several years; Huang first revealed it during a keynote address in 2024. The company said last year that Rubin-based systems would begin arriving in the second half of 2026.

The precise meaning of Nvidia’s statement that Vera Rubin is in “full production” remains somewhat unclear. Chip production at this scale, carried out in partnership with TSMC, typically starts at low volume during testing and validation and ramps up in later stages.

“This CES announcement surrounding Rubin is to reassure investors that we’re on track,” commented Austin Lyons, an analyst at Creative Strategies and author of the semiconductor newsletter Chipstrat. According to Lyons, whispers on Wall Street suggested the Rubin GPU was falling behind schedule, prompting Nvidia to assert that it has completed critical development and testing milestones and remains confident Rubin will ramp up production in the second half of 2026.

In 2024, Nvidia was forced to postpone the delivery of its then-new Blackwell chips due to a design flaw that caused overheating issues when connected in server racks. By mid-2025, shipments for Blackwell were back on schedule.

As the AI industry continues its rapid growth, software companies and cloud service providers are competing intensely for access to Nvidia’s latest GPUs. It’s anticipated that demand for Rubin will be equally high. However, some firms are also diversifying their strategies by investing in customized chip designs. For instance, OpenAI has announced its collaboration with Broadcom to develop tailored silicon for its next generation of AI models. These collaborations present a long-term challenge for Nvidia: Customers who create their own chips may achieve a level of hardware control not available through Nvidia.

Nonetheless, Lyons emphasizes that today’s announcements illustrate how Nvidia is progressing beyond simply providing GPUs to becoming a “full AI system architect, covering compute, networking, memory hierarchy, storage, and software orchestration.” Even as hyperscalers invest heavily in custom silicon, he notes that Nvidia’s highly integrated platform “is becoming increasingly difficult to displace.”
