Graphcore, a start-up based in the UK, introduced its second-generation Intelligence Processing Unit (IPU) platform with greater processing power, more memory and built-in scalability for handling extremely large Machine Intelligence workloads.

Multiple blades can work in unison in massive datacenter-scale systems of up to 64,000 IPUs – an “IPU-POD” whose maximum configuration would deliver 16 ExaFlops of Machine Intelligence compute power.
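The headline figures imply a per-IPU throughput that is easy to sanity-check. A minimal sketch, assuming the 16 ExaFlops are spread uniformly across all 64,000 IPUs (the per-IPU figure is derived here, not stated in the announcement):

```python
# Back-of-the-envelope check of the IPU-POD headline figures.
# Assumption: "16 ExaFlops across 64,000 IPUs" implies uniform per-IPU throughput.
TOTAL_FLOPS = 16e18   # 16 ExaFlops for a maximal IPU-POD
NUM_IPUS = 64_000     # maximum IPU-POD configuration

per_ipu_tflops = TOTAL_FLOPS / NUM_IPUS / 1e12
print(f"Implied per-IPU throughput: {per_ipu_tflops:.0f} TFlops")  # 250 TFlops
```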
Graphcore has developed its own low-latency IPU-Fabric technology to connect IPUs across the entire datacenter. A dedicated IPU-Gateway chip delivers 2.8 Tbps of bandwidth for each IPU-Machine M2000. The overall bandwidth grows to many Petabits/sec when multiple IPU-Machine M2000 systems are connected together.
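The aggregate fabric bandwidth follows directly from the per-machine figure. A rough sketch, assuming four IPUs per IPU-Machine M2000 (a detail not stated in this excerpt) and linear scaling of bandwidth with machine count:

```python
# Sketch of how per-machine fabric bandwidth aggregates at IPU-POD scale.
# Assumptions: 4 IPUs per IPU-Machine M2000; bandwidth scales linearly.
GATEWAY_BW_TBPS = 2.8    # IPU-Gateway bandwidth per M2000
IPUS_PER_MACHINE = 4     # assumed IPU count per M2000
NUM_IPUS = 64_000        # maximal IPU-POD

machines = NUM_IPUS // IPUS_PER_MACHINE
total_pbps = machines * GATEWAY_BW_TBPS / 1000  # Tbps -> Pbps
print(f"{machines} machines -> ~{total_pbps:.1f} Pbps aggregate")  # ~44.8 Pbps
```

The result, tens of Petabits/sec, matches the article's "many Petabits/sec" characterization.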
https://www.graphcore.ai/posts/introducing-second-generation-ipu-systems-for-ai-at-scale