• Home
  • Events Calendar
  • Blueprint Guidelines
  • Privacy Policy
  • Subscribe to Daily Newsletter
  • NextGenInfra.io
Converge Digest


Aurora breaks exascale barrier by linking 63,744 GPUs with Cray Slingshot Interconnects

May 13, 2024
in Data Centers

The Aurora supercomputer at the U.S. Department of Energy’s Argonne National Laboratory has officially surpassed the exascale threshold, achieving over a quintillion calculations per second, as announced today at the ISC High Performance 2024 conference in Hamburg, Germany.

Built by Intel and Hewlett Packard Enterprise (HPE), Aurora features a groundbreaking architecture, including 63,744 graphics processing units (GPUs), making it the world’s largest GPU-powered system, with more interconnect endpoints than any other system to date.

Aurora Architecture Highlights

Processing Units

  • Intel CPUs: Aurora is equipped with Intel Xeon CPU Max Series processors.
  • Intel GPUs: The system includes Intel’s Data Center GPU Max Series GPUs (code-named Ponte Vecchio), which are designed for high-performance computing (HPC) and artificial intelligence (AI) workloads.

Performance

  • Exascale Performance: Aurora delivers performance exceeding one exaFLOP (10^18 floating-point operations per second). This places it among the first exascale systems in the world, capable of performing over a quintillion calculations per second.
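As a back-of-the-envelope check, the headline exaFLOP figure can be related to the 63,744-GPU count quoted above. The even split across GPUs is a simplifying assumption (CPUs also contribute compute), so the per-GPU number below is an order-of-magnitude illustration, not a spec:

```python
# Rough arithmetic relating Aurora's headline exaFLOP figure to its GPU count.
# Assumption: all FLOPS attributed evenly to GPUs (illustrative only).

EXAFLOP = 1e18          # floating-point operations per second
num_gpus = 63_744       # GPU count from the article

per_gpu_tflops = EXAFLOP / num_gpus / 1e12   # teraFLOPS per GPU if split evenly
print(f"{per_gpu_tflops:.1f} TFLOPS per GPU (naive even split)")
```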

Memory

  • High-Bandwidth Memory: Aurora incorporates high-bandwidth memory (HBM) for both its CPUs and GPUs, which enhances data transfer rates and overall computational efficiency.
  • Unified Memory Architecture: The system uses a unified memory architecture that allows for seamless data sharing between CPUs and GPUs, reducing latency and improving performance.

Interconnect

  • Cray Slingshot: Aurora uses the Cray Slingshot high-speed interconnect, which offers advanced network capabilities, low latency, and high bandwidth. The Slingshot interconnect is based on Ethernet technology rather than InfiniBand.
  • Per-Link Throughput: Each link in the Slingshot network provides up to 200 gigabits per second (Gbps) of bandwidth. This high per-link throughput ensures rapid data transfer rates, crucial for the vast data sets and intensive computations typical in HPC workloads.
  • Network Scalability: Slingshot’s architecture scales to very large node counts, providing the high aggregate bandwidth needed to support thousands of nodes in an exascale system.
  • Adaptive Routing: Dynamic selection of optimal paths to avoid congestion and improve efficiency.
  • Quality of Service (QoS): Multiple QoS levels to prioritize critical traffic.
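To put the 200 Gbps per-link figure in perspective, a quick conversion to bytes per second shows what a single link can move. The 1 TB payload below is a hypothetical illustration (not from the article), and real transfers also pay protocol overhead, which this sketch ignores:

```python
# Back-of-the-envelope view of Slingshot's 200 Gbps per-link bandwidth.
# Assumptions: ideal line rate, no protocol overhead, hypothetical 1 TB payload.

link_gbps = 200                  # per-link bandwidth from the article (gigabits/s)
link_GBps = link_gbps / 8        # bits -> bytes: 25 GB/s
dataset_TB = 1.0                 # hypothetical dataset size
seconds = dataset_TB * 1000 / link_GBps
print(f"{link_GBps:.0f} GB/s per link; ~{seconds:.0f} s to move {dataset_TB} TB")
```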

Storage

  • Lustre File System: Aurora is expected to use the Lustre parallel file system, providing fast and scalable storage that can handle the immense data throughput generated by exascale computing workloads.

The installation team, comprising staff from Argonne, Intel, and HPE, is focused on system validation, verification, and scaling up. They are addressing various hardware and software issues as the system approaches full-scale operations.

“Aurora is fundamentally transforming how we do science for our country,” Argonne Laboratory Director Paul Kearns said. “It will accelerate scientific discovery by combining high performance computing and AI to fight climate change, develop life-saving medical treatments, create new materials, understand the universe and so much more.”

https://www.anl.gov/article/argonnes-aurora-supercomputer-breaks-exascale-barrier
Source: Argonne National Lab

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley

© 2025 Converge Digest - A private dossier for networking and telecoms.
