Technology, Process and Cost
Nvidia H100 Tensor Core GPU
By Yole SystemPlus
NVIDIA H100 with combined technology innovations in GPU front-end process and advanced heterogeneous packaging
SPR23722
Overview
- Executive Summary
- Product Specification
- Reverse Costing Methodology
- Glossary
Company Profile
- NVIDIA Financials & NVIDIA Products
- Hopper H100
- TSMC CoWoS
Physical Analysis
- Summary of the physical analysis
- Module Disassembly
- Package Analysis (Overview, X-Ray, Cross Section)
- Die Analysis (GPU, HBM, Interposer, filler die)
Physical Comparison
- NVIDIA P100, V100, A100, H100
Manufacturing Process Flow Analysis
- Global Overview
- Die Front-End & Wafer Fabrication Unit
- Packaging process
Cost Analysis
- Cost Summary
- Yields Explanation & Hypotheses
- GPU Die Front-End Wafer Cost
- HBM Memory & TSV (DRAM & Logic Die Cost)
- Interposer Wafer
- Packaging Cost (Chip on Wafer)
- Component Cost
Selling Price
- Definitions of Price
- Estimated Selling Price
Feedback
Related Products
About Yole Group
Introduced in H2 2022, the NVIDIA H100 Tensor Core GPU is NVIDIA's most complex and advanced accelerator, designed to power data centers for AI and high-performance computing (HPC) applications. The H100 is equipped with an advanced GPU chip and high-bandwidth memory.
To support applications that require the fastest computational speed and the highest data throughput, its architecture combines an advanced transistor process technology node with a high level of chip integration within the package. The GPU die is manufactured using TSMC’s N4 technology, allowing up to 80 billion transistors to be integrated on a single processor chip, with up to 228 KB of configurable SRAM on the chip. The CoWoS architecture allows NVIDIA to integrate the GPU and several HBM memory stacks on a silicon interposer, achieving improved interconnect density, higher performance, reduced power consumption, and a smaller form factor. TSMC’s CoWoS technology has continued to grow over the years to meet the exploding demand for AI technology.
Compared to its predecessor, the NVIDIA H100 delivers up to approximately six times the compute performance of the A100. The H100 GPU has a more than 40% increase in transistor count compared to the A100 GPU chip, yet its silicon die size has been reduced.
This full reverse costing study has been conducted to provide insight into the technology data, manufacturing cost, and selling price of the NVIDIA H100 component. The report includes a complete physical analysis of the 2.5D & 3D package, with 3D X-ray images and package cross sections in different zones. Several optical and high-resolution SEM images reveal the GPU die integration in the package, including the interposer die and the HBM2e DRAM memory. The front-end and back-end manufacturing processes of the silicon dies, the CoWoS process, and the final assembly are provided. An estimation of the manufacturing cost and the detailed packaging process steps are also given in the report. Finally, the report includes a comparison that highlights the similarities and differences between the Hopper H100, the Ampere A100, and NVIDIA’s Tesla P100 and V100.
- NVIDIA
- TSMC
- SK hynix
Key Features
- Detailed photos
- Precise measurements
- Materials analysis
- Manufacturing process flow
- Supply chain evaluation
- Manufacturing cost analysis
- Comparison with NVIDIA P100, V100, A100
Product objectives
- Technical and cost analysis of NVIDIA H100 with a deep focus on advanced packaging