
IBM Unveils Telum II Processor and Spyre Accelerator

August 27, 2024
in Semiconductors

IBM disclosed key architectural advancements in its forthcoming IBM Telum II Processor and IBM Spyre Accelerator at the Hot Chips 2024 conference. These innovations are set to enhance the processing capabilities of the next-generation IBM Z mainframe systems, particularly in supporting AI models, including large language models (LLMs) and generative AI. The new processor and accelerator aim to address the growing need for scalable, energy-efficient solutions as enterprises increasingly integrate AI into production environments.

The Telum II Processor features significant upgrades, including increased cache, memory capacity, and an integrated AI accelerator core. Complementing this, the IBM Spyre Accelerator, designed to work alongside the Telum II, offers scalable AI compute power, optimizing performance for complex AI models. Both chips are built on Samsung Foundry’s 5nm process, ensuring high performance and power efficiency. IBM’s continued focus on AI and advanced processing aims to provide enterprises with robust tools to manage and leverage AI-driven workloads at scale.
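
The paragraph above describes two complementary places to run AI work: the accelerator core built into the processor for inferencing inside the transaction path, and Spyre cards for larger, more complex models. As a purely conceptual sketch of that division of labor, written in plain Python with hypothetical function names and thresholds (IBM's actual software stack is not shown), the routing decision might look like this:

    # Conceptual sketch only: deciding whether AI work stays in the transaction
    # path (on-chip accelerator) or is dispatched to attached accelerator cards.
    # Every function, field, and threshold here is a hypothetical placeholder.
    from dataclasses import dataclass

    @dataclass
    class InferenceRequest:
        model_size_mb: int       # rough footprint of the model being invoked
        latency_budget_ms: int   # deadline imposed by the surrounding transaction

    def score_on_chip(req: InferenceRequest) -> str:
        # Placeholder for a small model evaluated next to the transaction,
        # analogous to the role of Telum II's integrated accelerator core.
        return "scored in transaction path"

    def run_on_attached_cards(req: InferenceRequest) -> str:
        # Placeholder for a larger generative model handed to accelerator cards,
        # analogous to the role the Spyre Accelerator is described as filling.
        return "dispatched to accelerator cards"

    def route(req: InferenceRequest) -> str:
        if req.latency_budget_ms <= 5 and req.model_size_mb <= 100:
            return score_on_chip(req)
        return run_on_attached_cards(req)

    print(route(InferenceRequest(model_size_mb=20, latency_budget_ms=2)))
    print(route(InferenceRequest(model_size_mb=8000, latency_budget_ms=500)))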

• Telum II Processor:
  • Cores and Frequency: Eight high-performance cores running at 5.5GHz.
  • Cache: 36MB L2 cache per core with a total of 360MB, a 40% increase in on-chip cache capacity.
  • Virtual L4 Cache: 2.88GB per processor drawer, a 40% increase over the previous generation.
  • Integrated AI Accelerator Core: Allows for low-latency, high-throughput in-transaction AI inferencing, quadrupling compute capacity per chip.
  • Data Processing Unit (DPU): Accelerates IO protocols for networking and storage, with a 50% increase in IO density.

• IBM Spyre Accelerator:
  • Memory: Supports up to 1TB of memory, scalable across eight cards in an IO drawer.
  • Compute Cores: Each chip features 32 compute cores supporting int4, int8, fp8, and fp16 datatypes (see the quantization sketch after this list).
  • Power Efficiency: Designed to consume no more than 75W per card.
  • AI Model Support: Optimized for low-latency, high-throughput AI applications, designed for complex AI models and generative AI use cases.

• Manufacturing and Availability:
  • Fabrication: Both chips are manufactured by Samsung Foundry on a 5nm process.
  • Launch Timeline: Expected availability in 2025 with the next-generation IBM Z and IBM LinuxONE platforms.
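
As referenced in the Compute Cores item above, the Spyre chip works in int4, int8, fp8, and fp16. Reduced precision is the usual lever for fitting larger models into a fixed memory and power budget: an 8-bit weight takes a quarter of the space of a 32-bit one, at a modest cost in accuracy. The sketch below is generic per-tensor int8 quantization in NumPy, not Spyre-specific code, but it shows the trade-off:

    # Illustrative only: per-tensor symmetric int8 quantization of fp32 weights.
    # Generic quantization math, not IBM- or Spyre-specific code.
    import numpy as np

    rng = np.random.default_rng(0)
    weights_fp32 = rng.standard_normal((4096, 4096)).astype(np.float32)

    # Symmetric quantization: map [-max|w|, +max|w|] onto the int8 range [-127, 127].
    scale = np.abs(weights_fp32).max() / 127.0
    weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

    # Dequantize to estimate the error introduced by the 8-bit representation.
    reconstructed = weights_int8.astype(np.float32) * scale
    mean_abs_error = np.abs(weights_fp32 - reconstructed).mean()

    print(f"fp32 size: {weights_fp32.nbytes / 2**20:.0f} MiB")  # 64 MiB
    print(f"int8 size: {weights_int8.nbytes / 2**20:.0f} MiB")  # 16 MiB, a 4x reduction
    print(f"mean abs quantization error: {mean_abs_error:.5f}")

The article's own figures also hint at the power envelope this serves: at no more than 75W per card, a fully populated IO drawer of eight cards stays within roughly 600W of accelerator power.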

“Our robust, multi-generation roadmap positions us to remain ahead of the curve on technology trends, including escalating demands of AI,” said Tina Tarquinio, VP, Product Management, IBM Z and LinuxONE.

(Image caption: IBM Telum Processor features interconnect to link up to 32 chips)
Tags: Hot Chips, IBM
Jim Carroll

Editor and Publisher, Converge! Network Digest, Optical Networks Daily - Covering the full stack of network convergence from Silicon Valley
