Converge Digest
Oracle Taps AMD MI355X GPUs for Zettascale AI Supercluster 

June 15, 2025
in Data Centers

Oracle and AMD have announced a major expansion of their partnership to deliver high-performance AI infrastructure, with Oracle Cloud Infrastructure (OCI) set to become one of the first hyperscalers to deploy AMD’s new Instinct MI355X GPUs. The forthcoming OCI supercluster will scale up to 131,072 GPUs, targeting massive AI workloads including large language model training, generative inference, and next-gen agentic applications.

The new MI355X-powered compute shapes promise a 2.8X performance boost over previous AMD generations, enabled by 288GB of HBM3 and up to 8TB/s memory bandwidth. Support for the FP4 standard will allow efficient inference of 4-bit quantized models, and dense liquid-cooled racks with 64 GPUs per rack aim to optimize thermal efficiency and performance density for hyperscale training. Oracle’s zettascale cluster will feature AMD Turin CPUs as powerful head nodes and AMD Pollara network interface cards for ultra-low latency RoCE networking.
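As a rough back-of-envelope check (not a figure from AMD or Oracle), the quoted 288GB of HBM3 per GPU implies how large a 4-bit quantized model a single device could hold; the fraction reserved for KV cache and activations below is purely an illustrative assumption:

```python
# Back-of-envelope sizing for FP4 (4-bit) model weights on one MI355X.
# Article figures: 288 GB HBM3 per GPU. The overhead split is an
# illustrative assumption, not an AMD/Oracle specification.

HBM_BYTES = 288e9          # 288 GB of HBM3 per GPU
BYTES_PER_PARAM_FP4 = 0.5  # 4 bits = half a byte per weight

# Reserve a rough fraction of memory for KV cache and activations (assumed).
WEIGHT_FRACTION = 0.7

max_params = HBM_BYTES * WEIGHT_FRACTION / BYTES_PER_PARAM_FP4
print(f"~{max_params / 1e9:.0f}B parameters fit in FP4 on one GPU")
```

Under that assumption a single GPU could hold a model on the order of 400B parameters in FP4, which is why 4-bit support matters for single-node inference of large LLMs.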

The collaboration also builds on AMD’s ROCm open software stack to ensure open-source compatibility and avoid vendor lock-in. By deploying AMD Pollara NICs on its backend, Oracle becomes the first cloud provider to implement Ultra Ethernet Consortium standards for AI networking at scale. “The latest generation of AMD Instinct GPUs and Pollara NICs on OCI will help support new use cases in inference, fine-tuning, and training,” said Forrest Norrod, EVP at AMD.

  • OCI to deploy up to 131,072 AMD Instinct MI355X GPUs in zettascale supercluster
  • MI355X offers 2.8X throughput boost and 50% more memory than prior generation
  • 288GB HBM3 per GPU and FP4 support for efficient LLM inference
  • Liquid-cooled racks at 125kW each, 64 GPUs per rack at 1,400W per GPU
  • AMD Pollara NICs bring programmable congestion control and UEC standards to AI networking
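The rack-level figures in the bullets above can be sanity-checked with quick arithmetic; the 64 GPUs, 1,400W per GPU, and 125kW rack budget are from the article, while the interpretation of the remainder as headroom for CPUs, NICs, and cooling is an assumption:

```python
# Sanity-check the quoted rack figures: 64 GPUs per rack at 1,400 W each
# inside a 125 kW liquid-cooled rack envelope (numbers from the article).

GPUS_PER_RACK = 64
WATTS_PER_GPU = 1_400
RACK_BUDGET_W = 125_000

gpu_power = GPUS_PER_RACK * WATTS_PER_GPU  # GPU draw alone
headroom = RACK_BUDGET_W - gpu_power       # assumed: CPUs, NICs, cooling, etc.

print(f"GPU draw per rack: {gpu_power / 1000:.1f} kW")
print(f"Headroom in 125 kW budget: {headroom / 1000:.1f} kW")
```

The GPUs alone account for 89.6kW, leaving roughly 35kW of the 125kW envelope for head-node CPUs, Pollara NICs, and cooling overhead, so the quoted numbers are internally consistent.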

“We are dedicated to providing the broadest AI infrastructure offerings,” said Mahesh Thiagarajan, EVP at Oracle Cloud Infrastructure. “AMD Instinct GPUs, paired with OCI’s performance, advanced networking, flexibility, and scale, will help our customers meet their inference and training needs for AI workloads and new agentic applications.”

Tags: AMD, Oracle

Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley


© 2025 Converge Digest - A private dossier for networking and telecoms.
