
AI race: can Nvidia sustain its outperformance in supplying data centers?

Nvidia is putting a spotlight on the semiconductor industry with its rapid revenue growth driven by the artificial intelligence (AI) revolution. The company brought in $30 billion during the second quarter, up by 15% from the first quarter and 122% from a year ago, exciting Wall Street investors who would typically have little interest in a semiconductor company.

Historically a graphics processing company, Nvidia has taken the lead in providing a full hardware and software solution for data centers. By selling full cabinets, the company has expanded its business model from semiconductor hardware supplier to AI server provider and has consequently pulled ahead of competitors such as Intel and AMD in the AI race.

Having leapt from a $2-3 billion-per-quarter company five years ago to $30 billion today, Nvidia has investors taking an interest in chip packaging and supply chains, as well as speculating about how much the company can deliver.

What does Nvidia’s latest financial statement mean? What effect will this have on the market? How are its competitors likely to respond? Take a moment to read the analysis written by Emilie Jolivet, Business Line Director of the More Moore activities, including processing and computing, at Yole Group.

Nvidia advances data center GPU technology

Nvidia reassured investors and the market that its new Blackwell B100 AI GPU and GB100 combined CPU and GPU, which will more than double the performance of the H200, will start shipping during the fourth quarter. The company had recently announced that the release would be delayed after it encountered problems with the interconnection between the GPU and memory, linked to the transition it has made in its packaging solutions.

Using TSMC’s silicon interposer-based solution (CoWoS-S), Nvidia released the Hopper H200 Tensor Core GPU as a follow-up to the H100, marketed as the world’s most powerful AI GPU chip. Fabricated on TSMC’s N4 node, the H200 is the first product to use HBM3e technology. It has six HBM3e stacks of 24GB each, for a total memory capacity of 141GB. The package uses a silicon interposer to interconnect the GPU with the six HBM stacks via CoWoS-S technology.
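
As a quick sanity check on those memory figures, here is a minimal back-of-envelope sketch in Python. The 141GB figure is the capacity stated above; the raw total of six 24GB stacks is 144GB, so the assumption here is that a few gigabytes are reserved rather than exposed, which the article does not explain.

```python
# Back-of-envelope check of the H200 memory figures quoted above.
stacks = 6            # six HBM3e stacks, as stated in the article
gb_per_stack = 24     # 24 GB per stack

raw_capacity_gb = stacks * gb_per_stack   # 144 GB of physical HBM3e
stated_capacity_gb = 141                  # capacity quoted for the H200

# The ~3 GB gap is assumed to be reserved capacity; this is an assumption,
# not something the article states.
print(f"raw HBM3e capacity   : {raw_capacity_gb} GB")
print(f"stated H200 capacity : {stated_capacity_gb} GB")
print(f"difference           : {raw_capacity_gb - stated_capacity_gb} GB")
```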

The B100 is speculated to use TSMC’s 3nm process node and will be the first Nvidia product to use a chiplet design, combining two GPU chips and eight HBM3e memory stacks – most likely from SK Hynix – packaged with CoWoS-L technology. CoWoS-L adds local silicon interconnects (LSI) to the interposer to improve chip design and packaging flexibility and to support high-frequency, broadband memory stacks. Instead of using one large piece of silicon, CoWoS-L incorporates several small silicon bridges that provide the links between dies with a tight bump pitch that can reach 25 µm (currently, mass-produced silicon interposers allow pitches no finer than 35 µm). These silicon bridges – the LSIs – are encapsulated in a mold compound, creating a molded interposer with RDL layers on top and bottom. This approach is more cost-efficient than a full silicon interposer, as the amount of silicon area needed is greatly reduced, while still allowing extremely high interconnection density in localized areas.
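
To give a rough sense of what the tighter bump pitch buys, the short sketch below compares interconnect density at the two pitches quoted above. It is an illustrative calculation only, assuming a simple square bump grid and ignoring keep-out zones and routing constraints.

```python
# Illustrative bump-density comparison: ~25 um pitch for CoWoS-L local
# silicon interconnects (LSI) versus ~35 um for mass-produced Si interposers.
# Assumes a simple square bump grid with no keep-out zones.

def bumps_per_mm2(pitch_um: float) -> float:
    """Bump density (bumps per square millimetre) for a square grid."""
    bumps_per_mm = 1000.0 / pitch_um
    return bumps_per_mm ** 2

lsi_density = bumps_per_mm2(25.0)         # ~1600 bumps/mm^2
interposer_density = bumps_per_mm2(35.0)  # ~816 bumps/mm^2

print(f"LSI bridge (25 um pitch)   : {lsi_density:,.0f} bumps/mm^2")
print(f"Si interposer (35 um pitch): {interposer_density:,.0f} bumps/mm^2")
print(f"density gain               : {lsi_density / interposer_density:.1f}x")
```

Under these simplifying assumptions, the finer pitch roughly doubles the achievable interconnect density in the bridged regions.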

Blackwell cabinets will contain up to 80 GPUs and will be priced at more than $3 million.

Nvidia, as a major consumer of advanced products, drives the semiconductor supply chain along with it, putting pressure on the HBM market, which grew from $2 billion in 2022 to more than $16 billion in 2024. And this will continue: beyond 2025, Nvidia’s AI GPU roadmap includes its R100 and X100 models, which could incorporate the next generation of HBM (HBM4 and HBM4E).
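
For scale, the implied growth rate behind those HBM market figures can be checked with a quick calculation, treating “more than $16 billion” as roughly $16 billion:

```python
# Implied compound annual growth rate (CAGR) for the HBM market:
# ~$2B in 2022 to roughly $16B in 2024, i.e. over two years.
start_value_b = 2.0   # 2022, $ billion
end_value_b = 16.0    # 2024, $ billion (approximation of "more than $16B")
years = 2

cagr = (end_value_b / start_value_b) ** (1 / years) - 1
print(f"implied HBM market CAGR 2022-2024: {cagr:.0%}")  # ~183% per year
```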

Will Nvidia continue to profit from AI or is it a bubble poised to burst?

One of the persistent questions among industry observers has been how much investment in data centers is sustainable.

The level of consumption over the coming years will shape which technologies are adopted and which companies develop them.

Emilie Jolivet, Business Line Director of the More Moore activities at Yole Group
At Yole Group, we expect Amazon, Google and Meta to maintain the pace of high levels of investment for the next couple of years, as they compete to offer more advanced cloud services and AI applications to businesses and consumers.

Google, Meta and Microsoft are among the companies developing proprietary solutions, but these solutions have seen limited use so far. Google has developed its own GPU-like accelerators, but it is still equipping its data centers with Nvidia’s products.

This suggests a bullish outlook for Nvidia for the next few quarters – particularly as it appears to have fixed the Blackwell delays. It also bodes well for players in the memory segment, including SK Hynix, Samsung and Micron, as Blackwell uses three times as much memory bandwidth as its predecessor. These companies have seen their revenues in this segment multiply fivefold as Nvidia and its competitors need increasing HBM volumes to keep pace with customer demand.

It also remains to be seen how the market will develop in Asia – how China will equip its own data centers and how fast the domestic industry will develop. With restrictions on their access to technology and equipment imports from the US, Europe and Japan, Chinese companies will need to develop their own leading-edge cards from their existing supply chains. There is currently one foundry in China with 7nm process capabilities.

The way companies package chips together and interconnect them is essential, and it relies on very advanced packaging techniques that so far only TSMC can deliver. In fact, data center AI is driving the advanced packaging market, which is estimated to grow at a 13% CAGR over the next five years, reaching more than $80 billion in 2029. Within advanced packaging technologies, 2.5D and 3D are growing at the fastest rate.
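
Reading that projection backwards gives a sense of the market’s implied current size. The sketch below assumes the growth applies over the five years to 2029 and treats “more than $80 billion” as roughly $80 billion:

```python
# Back out the implied starting market size from the projection quoted above:
# ~$80B in 2029 at a 13% CAGR over five years.
target_2029_b = 80.0  # $ billion, approximation of "more than $80B"
cagr = 0.13
years = 5

implied_start_b = target_2029_b / (1 + cagr) ** years
print(f"implied current advanced packaging market: ~${implied_start_b:.0f}B")  # ~$43B
```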

The AI race is strengthening the trade relationship between the US and Taiwan, as the Nvidia supply chain relies heavily on TSMC as well as substrates from Taiwan. This could have geopolitical implications as tensions among China, the US and Taiwan continue to affect global supply chains.

Yole Group has well-established analysis on Nvidia as well as the broader semiconductor industry. Keep following our coverage to monitor Nvidia’s performance over the coming quarters.

About the author

Emilie Jolivet is Business Line Director of the More Moore activities at Yole Group.

Drawing on her experience in the semiconductor industry, Emilie manages the expansion of the technical and market expertise of the memory, computing & software team.

In addition, Emilie’s mission focuses on managing business relationships with semiconductor leaders and developing market research and strategy consulting activities inside Yole Group.

Prior to joining Yole Group, and after an internship in failure analysis at Freescale (France), she spent seven years as an R&D engineer in the photovoltaic business, where she co-authored several scientific articles. Emilie then worked at EV Group (Austria) as a business development manager in 3D & Advanced Packaging.

Emilie Jolivet holds a Master’s degree in Applied Physics specializing in Microelectronics from INSA (Toulouse, France).  She also graduated with an MBA from IAE Lyon.

This article has been developed in collaboration with the semiconductor packaging team at Yole Group.
