Artificial intelligence chips from Advanced Micro Devices (AMD.O) are about 80% as fast as those from Nvidia Corp (NVDA.O), with a path to eventually matching their performance, according to a Friday report by an AI software firm. Nvidia dominates the market for the powerful chips used to create ChatGPT and other AI services that have swept through the technology industry in recent months. Demand for those chips has pushed Nvidia’s market value past $1 trillion and created a shortage that the company is working to resolve.
Nvidia’s chips are critical to many popular services built on large language models and cutting-edge AI algorithms, but the chipmaker’s dominance has created a massive opportunity for competitors. AMD’s latest Instinct AI accelerators aim to compete with Nvidia’s products in the data center. The company expects to capture a piece of the market for data center AI accelerators, which it projects will grow to more than $150 billion by 2027, and it hopes to expand that share over time as demand grows and more applications run on AI hardware.
The latest offering, the MI300X, is a server-grade chip that can handle demanding training workloads and run trained AI models. It will be available to sample this fall and start shipping in larger volumes next year. The chip is built on AMD’s CDNA 3 data-center GPU architecture and carries 192 GB of HBM3 memory, letting a single accelerator hold larger AI models than competing parts with less on-board memory.
AMD also announced a set of tools to make it easier for developers to use its chips. It consolidates previously disparate software stacks for its GPUs, CPUs, and adaptive processors into a single interface called the Unified AI Stack, which should allow developers to write code once and run it across all of AMD’s chips.
AMD’s stock fell 3% on Tuesday, even though the product announcement was a significant positive for the chipmaker. Its shares are up 82% this year on investor hopes that it can win business from Nvidia in the booming market for AI accelerator chips while boosting sales of its core processor business. The MI300 family is built on AMD’s CDNA 3 data-center GPU architecture rather than its consumer graphics designs, and comes in two main variants. The MI300X is all GPU, while the MI300A combines CDNA 3 GPUs with Zen 4 CPU cores and shared high-bandwidth memory on a single package. The MI300A is targeted at high-performance computing and AI customers, including the hyperscalers that deploy tens of thousands of servers and have the most demanding AI processing needs; both parts are expected to be broadly available early next year. AMD will also offer a server platform aimed at those hyperscalers that pairs eight MI300X chips, each with 192 GB of HBM3, with two of its 96-core Genoa (Zen 4 EPYC) server CPUs for maximum performance in AI training. AMD expects to offer that configuration in the fourth quarter.