AMD’s CEO Lisa Su Believes AI Data Center Accelerator Market Will Scale Up to $500 Billion By 2028, Driven By Demand For Inferencing

AMD’s CEO has revealed massive optimism about the future of the data center segment, claiming that the demand for AI accelerators will only grow.

AMD Reveals Big Plans For The AI Accelerator Segment, Planning To Capitalize on a $500 Billion Market

AMD argues that there isn’t enough compute available in the market to handle all of AI’s evolving use cases, and says the market should expect the firm’s AI/data center revenue to keep growing. At the Advancing AI keynote, AMD’s CEO Lisa Su said the data center AI accelerator market is growing at a whopping 60% CAGR, a rate she expects to hold over the coming years, which would put the segment’s valuation at $500 billion by 2028 and open up countless opportunities, not just for AMD but for competitors like NVIDIA as well. AI has a lot more room to grow, and by the looks of it, several new prospects are emerging for Big Tech.
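The 60% CAGR claim can be sanity-checked with simple compounding. A minimal sketch, assuming an illustrative base of roughly $45 billion in 2023 (a figure AMD has cited in earlier keynotes; treat it as an assumption here, as this article does not state the base year or size):

```python
# Sanity check: compound a hypothetical base market size at 60% CAGR.
# Assumption: ~$45B base in 2023 (illustrative, not stated in this article).
base_2023 = 45.0  # billions of USD
cagr = 0.60       # 60% compound annual growth rate

projected_2028 = base_2023 * (1 + cagr) ** (2028 - 2023)
print(f"Projected 2028 market: ${projected_2028:.0f}B")
```

Five years of 60% compounding turns ~$45B into roughly $470B, which lines up with the ballpark $500 billion figure quoted in the keynote.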


The AI accelerator market will keep growing because artificial intelligence is no longer limited to model training. The technology now spans multiple use cases that demand computational power, which AI GPUs provide. AMD’s CEO says that AI has scaled beyond data centers into cloud applications, edge AI, and client AI, and all of these fields require accelerators to supply the necessary compute. As for which firm will capitalize on that demand, the competition is stepping up, especially after AMD’s recent announcements.

AMD has announced that it is focusing on three strategies to broaden its AI portfolio: leadership compute engines, an open ecosystem, and full-stack solutions, so that customers get everything they need by adopting Team Red’s AI stack. On the compute engine side, AMD launched its latest Instinct MI350 AI lineup, built on the brand-new CDNA 4 architecture and fabricated on TSMC’s 3nm process node. The accelerators carry a massive HBM3E memory stack, and the flagship model, the MI355X, reaches up to 1400W of TDP. AMD says it has reached parity with NVIDIA’s Blackwell in terms of performance.

Similarly, on the software ecosystem side, AMD revealed the new ROCm 7 software stack, which includes enhanced frameworks such as vLLM v1, llm-d, and SGLang, along with a range of serving optimizations. Here’s what ROCm 7 brings:

  • Latest Algorithms & Models
  • Advanced Features for Scaling AI
  • MI350 series support
  • Cluster Management
  • Enterprise Capabilities

Team Red is shaping up to take an aggressive approach in the AI segment, rivaling NVIDIA, which has maintained a stronghold over the market for several years now.

https://wccftech.com/amd-ceo-believes-ai-data-center-accelerator-market-will-scale-up-to-500-billion-by-2028/amp/