• Home
  • Events Calendar
  • Blueprint Guidelines
  • Privacy Policy
  • Subscribe to Daily Newsletter
  • NextGenInfra.io
Converge Digest
Friday, April 10, 2026


Hot Interconnects: GigaIO Showcases SuperNODE Scale-Up Fabric for AI

August 22, 2025
in AI Infrastructure

At the IEEE Hot Interconnects online conference, GigaIO CEO Alan Benjamin presented the company’s SuperNODE architecture as a more efficient alternative to conventional scale-out GPU clusters for AI inference workloads. Built on the company’s FabreX PCIe memory fabric, SuperNODE connects dozens of accelerators into a single server, eliminating the inter-server overhead and latency of Ethernet- and InfiniBand-based clusters.

The SuperNODE is accelerator- and form-factor-agnostic, supporting GPUs from NVIDIA and AMD, FPGAs, ASICs, and custom inference processors from d-Matrix and Tenstorrent, across OAM, SXM, and PCIe board formats. With end-to-end latency of 330 nanoseconds, the system enables higher utilization and lower total cost of ownership. GigaIO also highlighted Gryf, a carry-on-sized portable AI supercomputer that brings datacenter-class compute to the edge.

Benchmark results presented at the conference showed SuperNODE delivering 83x faster time-to-first-token, 48% higher token throughput, and 51% more requests per second than a RoCE Ethernet-based scale-out cluster built from the same 32 GPUs, processors, memory, and storage. Tokens per watt improved by 80%, and tokens per dollar by 50%.

  • Connects dozens of heterogeneous accelerators into a single server
  • PCIe-based FabreX fabric with NVLink and Infinity Fabric integration
  • 330ns latency, lower than Ethernet and InfiniBand alternatives
  • Supports OAM, SXM, PCIe form factors
  • Benchmarks: 83x faster token response, 48% more tokens/sec, 51% more requests/sec
  • Tokenomics: +80% tokens/watt, +50% tokens/dollar
  • Companion product “Gryf” provides portable AI inference in a carry-on form factor
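The reported efficiency figures can be cross-checked with simple ratio arithmetic. The sketch below uses hypothetical baseline numbers (the talk reported only relative improvements); it shows that a 48% throughput gain combined with an 80% tokens-per-watt gain implies roughly 18% lower power draw for the SuperNODE configuration:

```python
# Hypothetical baseline for a 32-GPU RoCE scale-out cluster
# (illustrative values only; the presentation gave relative gains, not absolutes).
baseline = {"tokens_per_sec": 1000.0, "watts": 20000.0}

# Reported SuperNODE improvements vs. the scale-out baseline:
supernode_tokens = baseline["tokens_per_sec"] * 1.48   # +48% token throughput

tok_per_watt_base = baseline["tokens_per_sec"] / baseline["watts"]
# An 80% tokens-per-watt gain at +48% throughput implies lower power draw:
supernode_watts = supernode_tokens / (tok_per_watt_base * 1.80)
ratio = supernode_watts / baseline["watts"]            # 1.48 / 1.80 ≈ 0.82
print(f"implied SuperNODE power: {supernode_watts:.0f} W "
      f"({ratio:.0%} of baseline)")
```

Whatever baseline is assumed, the implied power ratio is fixed at 1.48 / 1.80, so the relative claims are internally consistent.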

“Our SuperNODE architecture is designed to deliver true scale-up performance for AI inference, with the lowest latency and highest efficiency in the industry,” said Alan Benjamin, CEO of GigaIO.

🌐 Analysis:

Founded in 2012 and based in Carlsbad, California, GigaIO has built its reputation on the FabreX PCIe-based memory fabric, a disaggregated infrastructure solution that allows pooling and dynamic allocation of accelerators, storage, and networking. The company has raised funding from investors including SK Hynix and has targeted HPC, AI, and composable infrastructure markets where accelerator heterogeneity is increasingly critical.

On the technology side, PCIe remains central to GigaIO’s strategy. PCIe Gen4 is widely deployed, with Gen5 now shipping in new servers, doubling per-lane bandwidth to 32 GT/s. Gen6, expected in 2026, will double that again to 64 GT/s, adopting PAM4 signaling to sustain performance growth. Each PCIe generation enables larger and more tightly coupled fabrics, a trend that supports GigaIO’s vision of rack-scale single-server architectures. As NVIDIA pushes proprietary interconnects (NVLink, NVSwitch) and AMD advances Infinity Fabric, GigaIO is betting on PCIe’s ubiquity and open ecosystem to win customers seeking flexibility and cost efficiency.
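The generation-over-generation bandwidth progression can be sketched with back-of-the-envelope arithmetic. This is an illustrative approximation that divides the raw signaling rate by 8 bits per byte and ignores encoding and protocol overhead (128b/130b framing on Gen4/Gen5, FLIT mode on Gen6):

```python
# Approximate per-lane and x16 bandwidth for recent PCIe generations.
# Raw signaling rates in GT/s per lane; real throughput is slightly lower
# once encoding and protocol overhead are accounted for.
GENERATIONS = {"Gen4": 16, "Gen5": 32, "Gen6": 64}

for gen, gts in GENERATIONS.items():
    per_lane_gbs = gts / 8        # rough GB/s per lane, per direction
    x16_gbs = per_lane_gbs * 16   # a typical GPU slot is x16
    print(f"{gen}: {gts} GT/s/lane ~ {per_lane_gbs:.0f} GB/s/lane, "
          f"~ {x16_gbs:.0f} GB/s per x16 slot")
```

At Gen6, a single x16 slot approaches 128 GB/s per direction, which is the kind of headroom that makes large single-node PCIe fabrics increasingly plausible.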

🌐 We’re tracking the latest developments in AI infrastructure and accelerator interconnects. Follow our ongoing coverage at: https://convergedigest.com/category/ai-infrastructure/

Tags: GigaIO, HOTI

Jim Carroll

Editor and Publisher, Converge! Network Digest and Optical Networks Daily, covering the full stack of network convergence from Silicon Valley

Related Posts

  • Hot Interconnects: UALink for Rack-Scale AI Interconnects (August 25, 2025)
  • Hot Interconnects: Avicena’s MicroLED Links Promise Sub-1 pJ/bit Energy Efficiency (August 25, 2025)
  • Hot Interconnects: Celestial AI’s Photonic Fabric (August 25, 2025)
  • Hot Interconnects: Avicena’s High-Yield MicroLED Arrays for Scale-Up AI Clusters (August 22, 2025)
  • Hot Interconnects: UCIe Brings On-Package Memory to the Forefront (August 22, 2025)
  • Hot Interconnects: Microsoft Maps Out $100B AI Networking Fabric (August 21, 2025)


© 2025 Converge Digest - A private dossier for networking and telecoms.
