WhiteFiber has deployed DriveNets’ Network Cloud-AI solution as the core networking infrastructure for its new GPU-as-a-Service (GPUaaS) data center in Iceland. The facility supports WhiteFiber’s expanding high-performance compute offerings, serving AI workloads over a low-latency, Ethernet-based fabric optimized for GPU interconnect and storage traffic.
The decision to use DriveNets’ Network Cloud-AI allows WhiteFiber to scale its infrastructure flexibly while maintaining high throughput and reliability. The platform carries both GPU-to-GPU and storage-to-GPU communications across the data center, and outperformed other Ethernet solutions in NCCL bus bandwidth tests. The platform’s rapid deployment capabilities and multi-tenancy support also figured in the choice, and WhiteFiber cited improved job completion times and more efficient GPU utilization.
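For context on the benchmark mentioned above: the “bus bandwidth” figure reported by NCCL’s benchmark suite (nccl-tests) normalizes the raw algorithm bandwidth by the amount of data each rank actually moves over the fabric, which makes results comparable across collective types and cluster sizes. A minimal sketch of the all-reduce case, following the formula documented in nccl-tests (the function name and the sample numbers are illustrative, not from WhiteFiber’s benchmark):

```python
# Sketch: deriving NCCL's bus bandwidth metric for an all-reduce,
# per the nccl-tests performance documentation. Illustrative only.

def allreduce_busbw(size_bytes: float, time_s: float, n_ranks: int) -> float:
    """Return bus bandwidth in GB/s for an all-reduce.

    algbw = data size / elapsed time; busbw scales algbw by
    2*(n-1)/n, the factor by which an all-reduce's fabric traffic
    exceeds the message size per rank.
    """
    algbw = size_bytes / time_s / 1e9
    return algbw * 2 * (n_ranks - 1) / n_ranks

# Example: an 8 GB all-reduce across 8 GPUs completing in 50 ms
print(allreduce_busbw(8e9, 0.05, 8))  # → 280.0 GB/s
```

Higher bus bandwidth at a given message size generally translates into the shorter job completion times WhiteFiber reports, since collective operations sit on the critical path of distributed training steps.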
DriveNets also introduced new features aimed at NeoCloud providers like WhiteFiber, including dynamic multi-tenant resource isolation and the ability to interconnect GPU clusters across multiple data centers up to 80km apart with lossless connectivity. This positions DriveNets as a strong alternative to InfiniBand in hyperscale and enterprise AI environments.
- WhiteFiber’s new AI data center is located in Iceland and optimized for GPUaaS workloads
- DriveNets Network Cloud-AI replaces InfiniBand with high-performance Ethernet fabric
- Deployment supports both GPU-to-GPU and storage-to-GPU communication
- New DriveNets features include multi-site cluster support up to 80km and improved tenant traffic isolation
- Deployment reinforces the trend toward Ethernet-based AI infrastructure
“We selected DriveNets Network Cloud-AI since it has been proven to deliver the highest Ethernet-based AI connectivity in enterprise and Hyperscaler environments,” said Tom Sanfillippo, CTO of WhiteFiber. “We were able to deploy Network Cloud-AI with very short lead time and exceptionally fast installation time, getting our AI data center up and running quickly.”