Tesla seeks next-gen HBM4 samples from Samsung and SK Hynix
- HBM4 chips poised to power Tesla’s advanced AI ambitions
- Dojo supercomputer to integrate Tesla’s high-performance HBM4 chips
- Samsung and SK Hynix compete for Tesla’s AI memory chip orders
As the high-bandwidth memory (HBM) market continues to grow, projected to reach $33 billion by 2027, the competition between Samsung and SK Hynix intensifies.
Tesla is fanning the flames: it has reportedly reached out to both Samsung and SK Hynix, two of South Korea’s largest memory chipmakers, seeking samples of their next-generation HBM4 chips.
Now, a report from the Korean Economic Daily claims Tesla plans to evaluate these samples for potential integration into its custom-built Dojo supercomputer, a critical system designed to power the company’s AI ambitions, including its self-driving vehicle technology.
Tesla’s ambitious AI and HBM4 plans
The Dojo supercomputer, driven by Tesla’s proprietary D1 AI chip, helps train the neural networks required for its Full Self-Driving (FSD) feature. This latest request suggests that Tesla is gearing up to replace older HBM2e chips with the more advanced HBM4, which offers significant improvements in speed, power efficiency, and overall performance. The company is also expected to incorporate HBM4 chips into its AI data centers and future self-driving cars.
Samsung and SK Hynix, long-time rivals in the memory chip market, are both preparing prototypes of HBM4 chips for Tesla. These companies are also aggressively developing customized HBM4 solutions for major U.S. tech companies like Microsoft, Meta, and Google.
According to industry sources, SK Hynix remains the current leader in the high-bandwidth memory (HBM) market, supplying HBM3e chips to NVIDIA and holding a significant market share. However, Samsung is quickly closing the gap, forming partnerships with companies like Taiwan Semiconductor Manufacturing Company (TSMC) to produce key components for its HBM4 chips.
SK Hynix appears to have made the most visible progress with HBM4. The company claims its solution delivers 1.4 times the bandwidth of HBM3e while consuming 30% less power. With bandwidth expected to exceed 1.65 terabytes per second (TB/s) alongside that reduced power draw, HBM4 promises the performance and efficiency needed to train massive AI models on Tesla’s Dojo supercomputer.
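The reported figures can be sanity-checked with some quick arithmetic. A minimal sketch in Python: note that the implied HBM3e baseline and the bandwidth-per-watt gain below are derived here from the quoted 1.4× and 30% claims, not stated in the report, and the 1.65 TB/s figure is assumed to be per-stack bandwidth.

```python
# Back-of-the-envelope check of the reported HBM4 figures.
# Assumptions (not in the source): 1.65 TB/s is per-stack bandwidth;
# the HBM3e baseline is inferred from the claimed 1.4x gain.

HBM4_BANDWIDTH_TBPS = 1.65   # reported HBM4 bandwidth, TB/s
BANDWIDTH_GAIN = 1.4         # claimed HBM4 vs. HBM3e bandwidth ratio
POWER_REDUCTION = 0.30       # claimed 30% lower power vs. HBM3e

# Implied HBM3e baseline bandwidth
hbm3e_tbps = HBM4_BANDWIDTH_TBPS / BANDWIDTH_GAIN

# Bandwidth-per-watt improves by gain / (1 - power reduction)
perf_per_watt_gain = BANDWIDTH_GAIN / (1 - POWER_REDUCTION)

print(f"Implied HBM3e bandwidth: {hbm3e_tbps:.2f} TB/s")
print(f"Bandwidth-per-watt improvement: {perf_per_watt_gain:.1f}x")
```

Taken at face value, the two claims compound: 1.4× the bandwidth at 0.7× the power works out to roughly double the bandwidth per watt, which is the metric that matters most for large AI training clusters.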