NVIDIA Taps Taiwanese Nanya Technology’s LPDDR5X Memory For Vera Rubin Platform

Offering 3x Capacity & Over 50% Bandwidth Boost

The Taiwan-based manufacturer is reportedly the country’s first to supply LPDDR5X memory for NVIDIA’s Vera Rubin platforms, as the green team diversifies its supply chain to power its Agentic AI powerhouse.

NVIDIA Diversifies Its Supply Chain, Adding Taiwan-Based Memory Maker For Vera Rubin’s LPDDR5X Solution

NVIDIA will require a lot of memory, both low-power and high-bandwidth, to fuel the growing needs of Agentic AI with its Vera Rubin platforms.


We know that NVIDIA’s Vera Rubin makes use of two types of memory: the Vera CPUs use LPDDR5X DRAM, while the Rubin GPUs use HBM4 DRAM. Each serves a different purpose. HBM4 is compact and offers much higher bandwidth, while LPDDR5X is power-efficient and offers higher densities. HBM4 is also harder to produce, with only a few major firms (Samsung, SK Hynix, and Micron) able to manufacture it, whereas LPDDR memory is widely deployed.

This allows NVIDIA to diversify its supply chain partners. We know that Micron and SK Hynix have developed SOCAMM2 memory for Vera Rubin platforms using their LPDDR5X solution, but NVIDIA is searching for more supply chain partners.

Based on a report from UDN, it looks like NVIDIA has found one in Taiwan: Nanya Technology, a Taiwan-based memory manufacturer that produces LPDDR5/LPDDR5X memory, has been selected as a supply chain partner for NVIDIA’s Vera Rubin.

Nanya Technology has become the first Taiwanese manufacturer to enter NVIDIA’s AI server main memory system, breaking the previous dominance of Korean and American companies and marking a new milestone for Taiwan’s memory industry.

UDN

This is a big win for Nanya Tech, as most Taiwanese memory manufacturers involved in the PC memory business were unable to meet the specifications for AI platforms. To address this, TSMC guided local companies on manufacturing and process optimizations, and we can see the result today with Nanya Technology being selected to produce memory for the fastest next-gen AI system on the planet.

NVIDIA Vera Rubin platforms with LPDDR5X will deliver a major boost over Grace Blackwell (GB300) servers. Each Vera Rubin Superchip will pack 1.5 TB of memory with 1.2 TB/s of bandwidth, a 3x increase in capacity and a more than 50% increase in bandwidth versus the previous generation. The Vera CPUs will also be deployed for rack-scale AI, offering 256 Vera chips per rack, up to 400 TB of memory, and up to 315 TB/s of aggregate bandwidth.

Vera is just as crucial as Rubin, since Agentic AI shifts the core focus from GPUs to CPUs. CPU workloads are demanding ever more memory, even as newer models compress the KV cache by up to 90%, so having multiple global supply chain partners is crucial. NVIDIA has made the right move to meet its demand by adding a Taiwan-based memory maker alongside its existing global partners.

https://wccftech.com/nvidia-taps-taiwanese-nanya-tech-lpddr5x-memory-for-vera-rubin-platform/amp/

| | NVIDIA Vera CPU Rack | NVIDIA Vera CPU |
|---|---|---|
| Configuration | 256 Vera CPUs | 1 Vera CPU |
| Cores / Threads | 22,528 NVIDIA Olympus cores / 45,056 threads | 88 NVIDIA Olympus cores / 176 threads |
| Memory Capacity | Up to 400 TB | Up to 1.5 TB |
| Aggregate Bandwidth | Up to 315 TB/s | Up to 1.2 TB/s |
| N/S Networking | NVIDIA BlueField-4 DPU | N/A |
| Cooling | Liquid Cooled | N/A |
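As a quick sanity check, the rack-scale figures above follow almost directly from the per-chip numbers. The short sketch below multiplies the per-Superchip specs by the 256-chip rack configuration; the variable names are illustrative, not NVIDIA terminology, and the quoted "up to" figures sit slightly above the raw multiples.

```python
# Back-of-the-envelope check: derive the rack-scale specs from
# the per-Superchip numbers quoted in this article.
cpus_per_rack = 256
cores_per_cpu = 88            # NVIDIA Olympus cores per Vera CPU
memory_per_cpu_tb = 1.5       # LPDDR5X capacity per Vera Superchip (TB)
bandwidth_per_cpu_tbs = 1.2   # LPDDR5X bandwidth per Vera Superchip (TB/s)

total_cores = cpus_per_rack * cores_per_cpu              # 22,528 cores
total_threads = total_cores * 2                          # 45,056 threads (2 per core)
total_memory_tb = cpus_per_rack * memory_per_cpu_tb      # 384 TB ("up to 400 TB")
total_bandwidth = cpus_per_rack * bandwidth_per_cpu_tbs  # ~307 TB/s ("up to 315 TB/s")

print(total_cores, total_threads, total_memory_tb, total_bandwidth)
```

The core and thread counts match the table exactly, while the raw memory and bandwidth products (384 TB and ~307 TB/s) land a little under the rounded "up to" figures NVIDIA quotes.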