The AI race explodes as HPE deploys AMD's Helios racks, crushing limits with Venice CPUs and insane GPU density
Date:
Thu, 04 Dec 2025 21:00:00 +0000
Description:
HPE adopts AMD Helios AI racks with 72 GPUs and Venice CPUs, aiming for exascale workloads while testing Ethernet scale-up.
FULL STORY ======================================================================
- HPE will ship 72-GPU racks with next-generation AMD Instinct accelerators globally
- Venice CPUs paired with GPUs target exascale-level AI performance per rack
- Helios relies on liquid cooling and a double-wide chassis for thermal management
HPE has announced plans to integrate AMD's Helios rack-scale AI architecture into its product lineup starting in 2026.
The collaboration gives Helios its first major OEM partner and positions HPE to ship full 72-GPU AI racks built around AMD's next-generation Instinct
MI455X accelerators.
These racks will pair with EPYC Venice CPUs and use an Ethernet-based
scale-up fabric developed with Broadcom.

Rack layout and performance targets
The move creates a clear commercial route for Helios and puts the
architecture in direct competition with Nvidia's rack-scale platforms already in service.
The Helios reference design relies on Meta's Open Rack Wide standard.
It uses a double-wide liquid-cooled chassis to house the MI450-series GPUs, Venice CPUs, and Pensando networking hardware.
AMD targets up to 2.9 exaFLOPS of FP4 compute per rack with the MI455X generation, along with 31TB of HBM4 memory.
The system presents every GPU as part of a single pod, which allows workloads to span all accelerators without local bottlenecks.
A purpose-built HPE Juniper switch supporting Ultra Accelerator Link over Ethernet forms the high-bandwidth GPU interconnect.
It offers an alternative to Nvidia's NVLink-centric approach.
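As a back-of-envelope check on those density claims, the short Python sketch below divides the quoted per-rack targets (2.9 exaFLOPS of FP4 compute, 31TB of HBM4, 72 GPUs) to get the implied per-accelerator figures. The per-GPU numbers are derived estimates for illustration, not published MI455X specifications.

# Per-GPU figures implied by the per-rack targets quoted above.
# Assumption: decimal units (1 exaFLOPS = 1,000 PFLOPS, 1 TB = 1,000 GB);
# the resulting per-GPU values are estimates, not confirmed MI455X specs.

RACK_FP4_EXAFLOPS = 2.9   # FP4 compute target per Helios rack
RACK_HBM4_TB = 31         # HBM4 capacity per rack
GPUS_PER_RACK = 72        # MI455X accelerators per rack

fp4_pflops_per_gpu = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK  # exa -> peta
hbm4_gb_per_gpu = RACK_HBM4_TB * 1000 / GPUS_PER_RACK          # TB -> GB

print(f"~{fp4_pflops_per_gpu:.0f} PFLOPS of FP4 per GPU")  # roughly 40
print(f"~{hbm4_gb_per_gpu:.0f} GB of HBM4 per GPU")        # roughly 431

Whether each MI455X actually reaches those implied numbers will depend on how AMD quotes the figures (for example, with or without sparsity) and on sustained clocks.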
The High-Performance Computing Center Stuttgart has selected HPE's Cray GX5000 platform for its next flagship system, named Herder.
Herder will use MI430X GPUs and Venice CPUs across direct liquid-cooled
blades and will replace the current Hunter system in 2027.
HPE stated that waste heat from the GX5000 racks will warm campus buildings, signalling environmental considerations alongside the performance goals.
AMD and HPE plan to make Helios-based systems globally available next year, expanding access to rack-scale AI hardware for research institutions and enterprises.
Helios uses an Ethernet fabric to connect GPUs and CPUs, which contrasts
with Nvidia's NVLink approach.
The use of Ultra Accelerator Link over Ethernet and Ultra Ethernet Consortium-aligned hardware supports scale-out designs within an open standards framework.
Although this approach allows GPU counts theoretically comparable to other high-end AI racks, performance under sustained multi-node workloads remains untested.
Reliance on a single Ethernet layer could also introduce latency or bandwidth constraints in real applications.
Ultimately, these specifications do not predict real-world performance, which will depend on effective cooling, network traffic handling, and software optimization.
Via Tom's Hardware
======================================================================
Link to news story:
https://www.techradar.com/pro/the-ai-race-explodes-as-hpe-deploys-amds-helios-racks-crushing-limits-with-venice-cpus-and-insane-gpu-density
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)