NVIDIA sets the pace for the AI era

Record-breaking results announced by NVIDIA have fuelled speculation of an AI goldrush. Days later, the company introduced platforms for generative AI at Computex 2023. Yole Group and its entities, Yole SystemPlus and Yole Intelligence, chart the rise of AI and accelerated computing and its effects across the industry, concluding with a review of the Taipei show.

Today, Yole Group’s analysts John Lorenz and Ying-Wu Liu revisit this significant announcement and analyse its impact on the overall industry. For the first time, a leading high-tech company is committing to an investment of this magnitude in AI. This shift will clearly influence the industry’s evolution and the overall supply chain. In addition, both experts give you a snapshot of Computex 2023.

This analysis has been developed with the following Yole Intelligence products: the Processor Market Monitor and the Computing and AI for Datacenter and Computing and AI for Automotive reports, together with the following Yole SystemPlus reverse engineering & costing products: NVIDIA Tesla P100 Graphics Processing Unit (GPU) with HBM2 and NVIDIA A100 Ampere GPU.

NVIDIA has transitioned from a graphics-processor company into an AI company, delivering processors for generative-AI supercomputers, modular reference architectures for accelerated servers, and networking technology to accelerate Ethernet-based AI clouds.

Yole SystemPlus’ reverse engineering & costing analyses have focused on NVIDIA GPUs. NVIDIA has been delivering the world’s most powerful graphics chips thanks to processor dies that continually adopt advanced technology nodes, paired with TSMC’s innovative heterogeneous packaging on the CoWoS platform, which provides high integration and better interconnect density. Driven by higher storage demand, NVIDIA has increased the number of HBM components on the board and has adopted HBM with up to 8 stacks in the recent generation.

The company’s share price rose 27% at the end of May, to $388.5, after it issued second-quarter revenue forecasts that were 50% higher than the stock market had expected. The company attributed the increase to rising demand for AI chips.

Susquehanna’s Chris Rolland called the rise one of the “biggest blowouts” of the last decade in the semiconductor industry. “This is a stark reminder that we are in the middle of a massive AI goldrush… and NVIDIA has all the picks and shovels,” he said.

The Santa Clara company’s GPGPUs, programmed with CUDA software, are the preferred hardware for datacenter AI acceleration. Large language models (LLMs), which power applications such as ChatGPT, are trained on GPUs, which are also used for inference. This marks a break from earlier cloud-based AI practice, in which CPUs handled inference after the model was trained on the GPU. NVIDIA’s revenue results and guidance for its datacenter business are evidence of LLMs driving demand for GPUs.

NVIDIA’s stock has climbed steadily throughout 2023, gaining 109% before this latest set of results. Its market capitalization could reach close to $1 trillion, with AI-related stocks adding a further $100 billion, according to data from Reuters. Rolland continued the goldrush analogy, adding that unlike 1849, this one is “built around generative AI, large language models and NVIDIA really is the only game in town”, with the best-positioned stock for now.

How AI will influence the industry

AI has been dominating headlines, but fulfilling its potential depends on a growing acceleration-hardware sector. Yole Intelligence’s Processor Market Monitor, Q1 2023 reported that less than 10% of servers shipped in 2022 had coprocessor acceleration hardware, but this number will grow as demand for cloud-based AI acceleration increases.

This will benefit acceleration hardware providers such as NVIDIA, AMD and TSMC. If the percentage of servers using coprocessor acceleration doubles to 20% by 2028, the GPU total addressable market (TAM) will exceed $50 billion.
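As a rough illustration of how an attach-rate assumption translates into a TAM figure of this order, consider the back-of-envelope calculation below. All inputs are hypothetical placeholders chosen for illustration, not figures from the Processor Market Monitor:

```python
# Back-of-envelope GPU TAM sketch. Every input is an illustrative
# assumption, not Yole Intelligence data.
servers_shipped = 14_000_000   # assumed annual server shipments in 2028
attach_rate = 0.20             # 20% of servers with coprocessor acceleration
gpus_per_server = 4            # assumed average accelerators per such server
gpu_asp = 5_000                # assumed average selling price per GPU, USD

tam = servers_shipped * attach_rate * gpus_per_server * gpu_asp
print(f"GPU TAM: ${tam / 1e9:.0f} billion")  # → GPU TAM: $56 billion
```

Under these placeholder assumptions, a 20% attach rate already yields a TAM above the $50 billion mark; the real drivers are the accelerator count per server and the average selling price, both of which trend upward for AI workloads.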

John Lorenz Senior Technology and Market Analyst, Yole Intelligence, part of Yole Group
The increasing demand for AI chips will have ramifications for all areas of the semiconductor industry, from die and packaging to software and protocols.

In the near future, packaging technology, in particular chiplet packaging, will become more important. For die manufacturers, the increased use of chiplet technology will mean that different die suppliers can contribute to a single chip, and not all parts of an SoC will need to be built at the leading-edge (and therefore most expensive) node.

At the same time, several AI chip manufacturers are aiming to deliver datacenter chips built on 3nm process nodes.

There will also be implications for IP design houses. Not only will the increasing complexity of SoC design influence development costs but the use of AI is likely to promote innovation and differentiation of design.

The automotive industry is likely to start using AI chips in the near future; MediaTek has already announced that it will develop 3nm automotive devices in cooperation with NVIDIA, using its GPUs and AI software. It is also likely that switch designers will address servers’ need for fast multi-chip communication in AI applications by developing effective GPU interconnects.

Other changes are likely in system-level design and the material supply chain for solution providers, and in software development as algorithms continue to grow. There will also be a need for new communication protocols, for example NVIDIA’s NVLink, which replaces PCIe as a direct GPU-to-GPU interconnect.
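A quick bandwidth comparison shows why a dedicated GPU-to-GPU link matters for multi-GPU AI servers. The link speeds below are approximate, commonly quoted peak figures used only to illustrate the gap, not vendor-verified numbers:

```python
# Rough time to move a large payload between two GPUs over each link.
# Bandwidths are approximate public peak figures (assumptions).
PCIE5_X16_GBPS = 64   # ~64 GB/s per direction, PCIe 5.0 x16
NVLINK4_GBPS = 450    # ~450 GB/s per direction, H100-class NVLink

payload_gb = 80       # e.g., the weights of a large language model

for name, bw in [("PCIe 5.0 x16", PCIE5_X16_GBPS), ("NVLink", NVLINK4_GBPS)]:
    print(f"{name}: {payload_gb / bw:.2f} s to transfer {payload_gb} GB")
```

Even with these rough numbers, the order-of-magnitude difference in transfer time explains the push toward direct GPU interconnects for training and inference clusters.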

Post-Computex 2023 report – What did you miss?

Computex 2023 was the first in-person edition since 2019. AI was just one of the key technologies at the event, which was held at the Taipei Nangang Exhibition Center (May 30 – June 2).

The international technology event began with keynotes from industry luminaries and forum speakers comprising representatives from computing companies (e.g., Supermicro, ASUS, Lenovo) and silicon vendors (e.g., Intel, NXP, STMicroelectronics, NVIDIA and Texas Instruments), as well as Arm and Siemens, all discussing technology trends from the metaverse to sustainable development.

Technology themes:

Six key themes were identified for 2023: HPC, AI applications, next-gen connectivity, hyper-reality, innovations & startups, and sustainability.

Ying-Wu Liu Technology & Cost Analyst at Yole SystemPlus, part of Yole Group
Among these topics, AI drew particular attention, in part because of headlines around ChatGPT and the show’s association with computing technology, which is critical to AI adoption. For example, Jensen Huang’s keynote explored how AI can reshape different industries around the world.

NVIDIA also announced that the Grace Hopper Superchip (GH200) is in full production, using TSMC’s CoWoS (chip-on-wafer-on-substrate) process.

Although NVIDIA has set the stage for the booming cloud/datacenter AI trend, other players contributed with key announcements. Gigabyte, QCT and Supermicro all highlighted server motherboard and chassis configurations that can be used for AI acceleration.

The thirst for AI even extended to power supply and network switch manufacturers, which used Computex 2023 as a platform to show how their solutions enable the AI servers that are used for LLMs.

The event was also where Arm introduced TCS23, its new platform for mobile computing, which will be integrated into MediaTek’s next Dimensity generation. Arm also announced that the NVIDIA Grace SoC uses the Arm Neoverse V2 compute system and that SoftBank is using the Grace Hopper Superchip to build the world’s first 5G AI datacenter.

Another notable presentation was by Ampere, which introduced its Altra family of 128-core processors, manufactured using 7nm process technology. The company also announced that it is beginning edge platform development in cooperation with 7STARLAKE, ADLINK, ASRock and Hewlett Packard.

NVIDIA was omnipresent, but other dominant server processors were also in evidence, including Intel’s 4th-generation Xeon (Sapphire Rapids) and AMD’s Zen 4 EPYC (Genoa and Bergamo).

While NVIDIA is attracting the most attention today, we expect fierce competition in AI acceleration hardware. Intel and AMD have new products in the pipeline targeting the same applications and will certainly be aggressive in catching this wave as well.

About the authors

John Lorenz is a Senior Technology and Market Analyst, Computing & Software within the Semiconductor, Memory & Computing division at Yole Intelligence, part of Yole Group. John is engaged in the development of market and technology monitors for the logic segment of advanced semiconductors, with a primary focus on processors. Prior to joining Yole Intelligence, John held various engineering and strategic planning roles over 15 years at Micron Technology.

John has a Bachelor of Science degree in Mechanical Engineering from the University of Illinois Urbana-Champaign (USA), with a focus on MEMS devices.

Ying-Wu Liu serves as a Technology & Cost Analyst at Yole SystemPlus, part of Yole Group. Ying-Wu’s core expertise is integrated circuit technologies. With solid expertise in the physical and electronic analysis of devices, and experience in wafer manufacturing and technical support for international clients, Ying’s mission is to develop reverse engineering & costing reports. She works closely with different laboratories to set up significant physical & chemical analyses of innovative IC chips. Based on the results, Ying identifies and analyses the overall manufacturing process and all technical choices made by IC makers to understand the structure of the device and to point out the link between cost and technology. Prior to Yole SystemPlus, Ying worked as a Technical Support Manager at KEOLABS, where she built up her ability to cooperate with clients from different cultures.

Ying holds a master’s degree in Theoretical Physics from National Tsing Hua University (Taiwan) and a master’s in Integration, Security and Trust in Embedded Systems from Grenoble INP, ESISAR (France).