AMD has introduced the Pensando Pollara 400, a 400 Gbps RDMA Ethernet Network Interface Card (NIC) designed to meet the demands of modern AI and machine learning (ML) workloads. The Pollara 400 optimizes GPU-to-GPU communication in AI clusters, providing low-latency, high-throughput data transfers crucial for handling the communication patterns of advanced AI models. By addressing these needs, the Pollara 400 enables organizations to maintain their existing Ethernet infrastructure while supporting the high-performance networking required by AI workloads.
Key capabilities of the Pollara 400 include P4 programmability, which allows users to customize network behaviors and adapt to future AI requirements. Additionally, its multipathing technology distributes traffic across multiple network paths, reducing congestion and enhancing throughput. The NIC’s in-order message delivery and selective retransmission features enable fast recovery from packet loss, reducing stalls in collective operations and improving overall efficiency.
• 400 Gbps RDMA Ethernet NIC for AI workloads
• P4 programmability for future-proofing network behavior
• Multipathing and adaptive packet spraying for reduced congestion
• In-order message delivery for fast, reliable communication
• Selective retransmission for optimized bandwidth use
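To see why selective retransmission saves bandwidth, it helps to contrast it with the simpler go-back-N approach, which resends every packet from the first loss onward. The sketch below is a hypothetical illustration of the general technique, not AMD's implementation; the function names and packet model are assumptions for demonstration only.

```python
# Hypothetical sketch (not AMD's implementation): compare how many
# packets each recovery strategy resends after the receiver reports
# losses. Packets are modeled as plain sequence numbers.

def go_back_n_resend(sent, lost):
    """Resend every packet from the first lost sequence number onward."""
    if not lost:
        return []
    first = min(lost)
    return [seq for seq in sent if seq >= first]

def selective_resend(sent, lost):
    """Resend only the packets the receiver reported as lost."""
    return [seq for seq in sent if seq in lost]

sent = list(range(10))   # packets 0..9 in flight
lost = {3, 7}            # receiver reports packets 3 and 7 missing

print(go_back_n_resend(sent, lost))  # [3, 4, 5, 6, 7, 8, 9]
print(selective_resend(sent, lost))  # [3, 7]
```

With two packets lost out of ten, go-back-N resends seven packets while the selective approach resends only the two that were actually dropped, which is the bandwidth saving the feature list refers to.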
“Pollara 400 delivers the performance AI networks need while allowing customers to stay on familiar Ethernet-based fabrics,” an AMD representative stated.
