Rumor: NVIDIA Expected To Produce Next-Gen R100 GPUs In Q4 2025 - Named After Vera Rubin

Nvidia is set to advance its AI technology with the introduction of its new R-series AI chip, known as the R100. Scheduled for mass production in the fourth quarter of 2025, the R100 represents a strategic step forward in processing capability and energy efficiency for AI applications. Developed under the codename "Rubin," after the astronomer Vera Rubin, the architecture aims to address growing power-consumption concerns in high-performance computing environments. The R100 will be manufactured on Taiwan Semiconductor Manufacturing Company's (TSMC) 3 nm EUV FinFET process, known as N3, a step up from the N4P process used in the previous-generation "Blackwell" B100. The shift reflects a focus on improving computational efficiency and reducing power draw, which becomes critical as AI processing demands escalate.

Notably, the R100 is expected to use a roughly 4x reticle-size design, up from the B100's 3.3x reticle design, allowing for larger and more powerful chiplets. Packaging will continue to rely on TSMC's CoWoS-L (Chip-on-Wafer-on-Substrate) technology, consistent with its predecessor; this approach supports higher bandwidth and better energy efficiency, both essential for the complex computations required in modern AI tasks. One of the key features of the R100 is its integration of eight High Bandwidth Memory 4 (HBM4) stacks, a cutting-edge memory solution that delivers the substantial bandwidth increases advanced AI workloads demand. The final specification of the interposer, which connects the chiplets within the processor, is still being settled, with two to three configurations under evaluation.
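To make the bandwidth point concrete, the short Python sketch below shows how aggregate memory bandwidth scales with the number of HBM stacks. The function name and all per-stack figures are illustrative assumptions, not published HBM4 or R100 specifications.

# Illustrative arithmetic only: how aggregate memory bandwidth scales with
# the number of HBM stacks. Per-stack figures are placeholder assumptions,
# not confirmed HBM4 or R100 specifications.

def aggregate_bandwidth_tbps(num_stacks: int, per_stack_gbs: float) -> float:
    """Total bandwidth in TB/s for num_stacks stacks at per_stack_gbs GB/s each."""
    return num_stacks * per_stack_gbs / 1000.0

# Hypothetical comparison: eight stacks at an assumed ~1,500 GB/s per HBM4-class
# stack versus six stacks at an assumed ~1,000 GB/s per HBM3-class stack.
print(aggregate_bandwidth_tbps(8, 1500))  # 12.0 TB/s (assumed)
print(aggregate_bandwidth_tbps(6, 1000))  # 6.0 TB/s (assumed)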

The Grace CPU in the accompanying GR200 superchip is also set to move to the N3 process, an optical shrink intended to trim power usage without sacrificing performance, underscoring Nvidia's broader commitment to the energy efficiency of its AI solutions.

From a broader perspective, Nvidia recognizes the challenges posed by the energy demands of AI servers in cloud service provider (CSP) and hyperscale data center environments. The R-series chips, therefore, are not merely designed to boost AI processing power but also to significantly lower power consumption. This dual focus is expected to make Nvidia's solutions more attractive for data center deployments, addressing both the technological needs and environmental impact concerns of large-scale operations.
