Intel launches Sapphire Rapids fourth-gen Xeon CPUs and Ponte Vecchio Max GPU Series

After years of delays, Intel formally launched its fourth-gen Xeon Scalable Sapphire Rapids CPUs, in both regular and HBM-infused Max flavors, and its “Ponte Vecchio” Data Center GPU Max Series today. Intel’s expansive portfolio of 52 new CPUs will face off with AMD’s EPYC Genoa lineup that debuted last year. The company also slipped in a low-key announcement of its last line of Optane Persistent Memory DIMMs.

While AMD’s chips maintain the core count lead with a maximum of 96 cores on a single chip, Intel’s Sapphire Rapids chips bring the company up to a maximum of 60 cores, a 50% improvement over its previous peak of 40 cores with the third-gen Ice Lake Xeons. Intel claims this will lead to a 53% improvement in general compute over its prior-gen chips, but largely avoided making direct comparisons to AMD’s chips during its presentations. However, Intel has provided samples to the press for unrestricted third-party reviews, so it isn’t shying away from the competition.

Sapphire Rapids leans heavily into new acceleration technologies that can either be purchased outright or bought through a new pay-as-you-go model. These new purpose-built accelerator regions of the chip are designed to radically boost performance in several types of work, like compression, encryption, data movement, and data analytics, that typically require discrete accelerators for maximum performance.

AMD holds a clear core count lead, but its Genoa processors lack comparable built-in acceleration features. When employing the new accelerators, Intel claims an average 2.9X improvement in performance-per-watt over its own previous-gen models in some workloads. Intel also claims a 10X improvement in AI inference and training, and a 3X improvement in data analytics workloads.

Intel’s Sapphire Rapids, which comes fabbed on the ‘Intel 7’ process, also brings a host of new connectivity technologies, like support for PCIe 5.0, DDR5 memory, and the CXL 1.1 interface (type 1 and 2 devices), giving the company a firmer footing against AMD’s Genoa. We’re hard at work benchmarking the chips for our full review that we will post in the coming days, but in the interim, here’s a brief overview of the new lineup.

Intel 4th-Gen Xeon Scalable Sapphire Rapids Pricing and Specifications

Intel’s Sapphire Rapids product stack spans 52 models carved into ‘performance’ and ‘mainstream’ dual-socket chips for general-purpose use. There are also specialized models for liquid-cooled, single-socket, networking, cloud, HPC, and storage/HCI systems. As a result, it feels like there’s a specialized chip for nearly every workload, creating a confusing product stack.

Those chips are then carved into various Max, Platinum, Gold, Silver, and Bronze sub-tiers, each denoting various levels of socket scalability, support for Optane persistent memory, RAS features, SGX enclave capacities, and the like.

The Sapphire Rapids chips also come with a varying number of accelerator devices enabled onboard. For now, it’s important to know that each chip can have a different number of accelerator ‘devices’ enabled (listed in the spec sheet above; think of the number of ‘devices’ as akin to accelerator ‘cores’).

You can buy chips that come fully enabled with four devices for each accelerator, or opt for less expensive models with fewer enabled devices. If the chip isn’t fully enabled, you can activate the accelerators later via a new pay-as-you-go mechanism called Intel on Demand. The ‘+’ models have at least one accelerator of each type enabled by default. However, there are two classes of chips with two different allocations of accelerators. We’ll dive into those details, and the different types of accelerators, below.

The new processors all support AVX-512, Deep Learning Boost (DLBoost), and the new Advanced Matrix Extensions (AMX) instructions, with the latter delivering a substantial performance uplift in AI workloads by using a new set of two-dimensional registers called tiles. Intel’s AMX implementation will primarily be used to boost performance in AI training and inference operations.
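For developers, AMX is exposed through compiler intrinsics in immintrin.h rather than through a new library. The minimal sketch below is an illustration, not Intel’s reference code: it assumes a Linux kernel new enough to grant tile-state permission via arch_prctl (5.16 or later) and a compiler built with AMX flags (e.g., -mamx-tile -mamx-int8), and simply performs one int8 tile multiply-accumulate.

```c
// Minimal AMX sketch: one int8 tile multiply-accumulate into int32 results.
// Assumes Linux >= 5.16 and GCC/Clang with -mamx-tile -mamx-int8.
#include <immintrin.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_REQ_XCOMP_PERM 0x1023
#define XFEATURE_XTILEDATA  18

// Tile configuration: palette, plus rows and bytes-per-row for each tile register.
struct tile_config {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];
    uint8_t  rows[16];
};

int main(void) {
    // The OS must grant this process permission to use the extended tile state.
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
        fprintf(stderr, "AMX tile state not available\n");
        return 1;
    }

    // Configure three 16-row x 64-byte tiles: tmm0 accumulates, tmm1/tmm2 hold inputs.
    struct tile_config cfg = { .palette_id = 1 };
    for (int t = 0; t < 3; t++) { cfg.rows[t] = 16; cfg.colsb[t] = 64; }
    _tile_loadconfig(&cfg);

    // Dummy int8 inputs and an int32 accumulator (each tile is 16 x 64 bytes).
    int8_t  a[16 * 64], b[16 * 64];
    int32_t c[16 * 16];
    memset(a, 1, sizeof a); memset(b, 2, sizeof b); memset(c, 0, sizeof c);

    _tile_loadd(1, a, 64);          // load A into tmm1
    _tile_loadd(2, b, 64);          // load B into tmm2
    _tile_loadd(0, c, 64);          // load accumulator into tmm0
    _tile_dpbssd(0, 1, 2);          // tmm0 += dot products of int8 pairs from tmm1, tmm2
    _tile_stored(0, c, 64);         // write results back to memory

    printf("c[0] = %d\n", c[0]);    // 1 * 2 summed over 64 int8 pairs = 128
    _tile_release();                // free the tile state
    return 0;
}
```

In practice, most workloads will pick up AMX through frameworks and libraries (oneDNN, and the AI frameworks built on it) rather than hand-written intrinsics like these.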

As before, Intel’s 4th-Gen Xeon Scalable platform supports 1-, 2-, 4-, and 8-socket configurations, whereas AMD’s Genoa only scales to two sockets. AMD leads in PCIe connectivity options, with up to 128 PCIe 5.0 lanes on offer, while Sapphire Rapids peaks at 80 PCIe 5.0 lanes.

Sapphire Rapids also supports up to 1.5TB of DDR5-4800 memory spread across eight channels per socket, while AMD’s Genoa supports up to 6TB of DDR5-4800 memory spread across 12 channels. Intel has spec’d its 2DPC (DIMMs per Channel) configuration at DDR5-4400, whereas AMD has not finished qualifying its 2DPC transfer rates (the company expects to release the 2DPC spec this quarter).
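To put those channel counts in perspective, here is a quick back-of-the-envelope calculation of theoretical peak memory bandwidth per socket at the rated 1DPC speeds. This is a simple sketch assuming the standard 8 bytes per channel per transfer; sustained real-world throughput will be lower.

```c
// Theoretical peak DRAM bandwidth per socket at DDR5-4800, 1DPC.
#include <stdio.h>

int main(void) {
    const double transfers_per_s  = 4800e6; // DDR5-4800
    const double bytes_per_xfer   = 8.0;    // 64-bit channel
    const double per_channel_gbps = transfers_per_s * bytes_per_xfer / 1e9;

    printf("Per channel:              %.1f GB/s\n", per_channel_gbps);       // 38.4
    printf("Sapphire Rapids (8 ch.):  %.1f GB/s\n", per_channel_gbps * 8);   // 307.2
    printf("Genoa (12 ch.):           %.1f GB/s\n", per_channel_gbps * 12);  // 460.8
    return 0;
}
```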

The Sapphire Rapids processors span from eight-core models to 60 cores, with pricing beginning at $415 and peaking at $17,000 for the flagship Xeon Scalable Platinum 8490H. The 8490H has 60 cores and 120 threads, with all four accelerator types fully enabled. The chip also has 112.5 MB of L3 cache and a 350W TDP rating.

The Sapphire Rapids TDP envelope spans from 120W to 350W. The 350W rating is significantly higher than the 280W peak of Intel’s previous-gen Ice Lake Xeon series, but the inexorable push for more performance has the industry at large moving toward higher power limits. For instance, AMD’s Genoa tops out at a similar 360W TDP, albeit for a 96-core model, and can even be configured as high as 400W.

The 8490H is the lone 60-core model, and it is only available with all acceleration engines enabled. Stepping back to the 56-core Platinum 8480+ will cost you $10,710, but that chip comes with only one of each type of acceleration device active. This processor has a 3.8 GHz boost clock, a 350W TDP, and 105MB of L3 cache.

Source: https://www.tomshardware.com/