NVIDIA's next-gen R100 AI GPU: TSMC 3nm with CoWoS-L packaging, next-gen HBM4 in Q4 2025

NVIDIA's next-generation AI chip -- the new R-series/R100 AI GPU -- will enter mass production in Q4 2025 on TSMC's N3 node with CoWoS-L packaging.


NVIDIA is still cooking its new Blackwell GPU architecture and B200 AI GPU, and while we've had teases of the next-gen Vera Rubin GPU, we're now hearing that the next-gen R100 AI GPU will enter mass production in Q4 2025.


In a new post, industry insider Ming-Chi Kuo says NVIDIA's next-generation AI chip -- the R-series/R100 AI GPU -- will enter mass production in Q4 2025, with the system/rack solution likely entering mass production in 1H 2026. The next-gen R100 will be made on TSMC's newer N3 process node (the B100 uses TSMC N4P) and will use TSMC's CoWoS-L packaging (the same as B100).

NVIDIA's next-gen R100 AI GPU features a roughly 4x reticle design, compared to the B100's 3.3x reticle design, while the interposer size for R100 "has yet to be finalized," with Kuo saying there are 2-3 options on the table. R100 will be equipped with eight HBM4 units, while GR200's Grace CPU will use TSMC's N3 process (compared to TSMC's N5 for the GH200 and GB200 Grace CPUs).
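
For a sense of scale: a single lithography reticle tops out at roughly 858mm2 (26 x 33mm), so as a rough back-of-the-envelope estimate on our part (the ~858mm2 figure and the math below are our assumption, not from Kuo's post), those reticle multipliers imply packaging areas in the ballpark of:

$$
3.3 \times 858\,\text{mm}^2 \approx 2831\,\text{mm}^2 \;(\text{B100-class}) \qquad\qquad 4 \times 858\,\text{mm}^2 \approx 3432\,\text{mm}^2 \;(\text{R100-class})
$$

That is well beyond what a single-reticle interposer can cover, which is part of why the final interposer size is still being weighed.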

Kuo says that NVIDIA realizes the power consumption of AI servers has "become a challenge" for customers' procurement and data center construction. This means NVIDIA's next-gen R-series AI chips and systems will focus on improving power consumption in addition to delivering the next level of AI computing power.

Kuo's full post reads:

  1. NVIDIA's next-generation AI chip, the R-series/R100 AI chip, will enter mass production in 4Q25, and the system/rack solution will likely start mass production in 1H26.
  2. R100 will use TSMC's N3 node (vs. TSMC's N4P for B100) and CoWoS-L packaging (same as B100).
  3. R100 adopts about 4x reticle design (vs. B100's 3.3x).
  4. The interposer size for R100 has yet to be finalized. There are 2-3 options.
  5. R100 will be equipped with eight HBM4 units.
  6. GR200's Grace CPU will use TSMC's N3 process (vs. TSMC's N5 for GH200 & GB200's Grace CPU).
  7. NVIDIA realizes that the power consumption of AI servers has become a challenge for customers' procurement and data center construction. Hence, the R-series chips and system solutions focus on improving power consumption in addition to enhancing AI computing power.
NEWS SOURCE: medium.com

