Advanced Micro Devices Inc. showcased its upcoming line of artificial intelligence processors, aiming to help data centers handle a crush of AI traffic and challenge Nvidia Corp.’s dominance in the burgeoning market.
The company’s Instinct MI300 series will include an accelerator that can speed processing for generative AI — the technology used by ChatGPT and other chatbots — AMD said during a presentation in San Francisco on Tuesday. The product, called MI300X, is part of a lineup that was unveiled at the CES conference in January.
Like much of the chip industry, AMD is racing to meet booming demand for AI computing. Popular services that rely on large language models — algorithms that crunch massive amounts of data to answer queries and generate images — are pushing data centers to the limit. And so far, Nvidia has had an edge in supplying the technology needed to handle these workloads.
“We are still very, very early in the life cycle of AI,” AMD Chief Executive Officer Lisa Su said at the event.
The total addressable market for data center AI accelerators will rise from $30 billion this year to more than $150 billion in 2027, she said. “It’s going to be a lot.”
Executives from Amazon.com Inc.’s AWS and Meta Platforms Inc. joined Su on stage to talk about using new AMD processors in their data centers. The chipmaker also announced the general availability of the latest version of its Epyc server processors and a new variant, called Bergamo, aimed at cloud computing workloads.
The MI300X accelerator is based on AMD’s CDNA 3 technology and uses as much as 192 gigabytes of memory to handle workloads for large language models and generative AI, the Santa Clara, California-based company said.
Key customers will start sampling the technology in the third quarter, AMD said. Another model, the Instinct MI300A, is going out to customers now.