
Today’s generative AI models exceed a trillion parameters, and their computational demands are fast outstripping the processing power of the GPUs, accelerators, and other chips (XPUs) that power hyperscale data centres. Networking individual XPUs can help bridge this gap: by distributing AI training and inference workloads across many chips, networked XPUs deliver a significantly more powerful computing platform than any single device can provide.