The Cerebras CS-2 features 850,000 AI-optimized compute cores, 40 GB of on-chip SRAM, 20 PB/s of memory bandwidth, and 220 Pb/s of interconnect bandwidth, all enabled by purpose-built packaging, cooling, and power delivery. It is fed by 1.2 terabits per second of I/O across twelve 100 Gigabit Ethernet links. Every design choice has been made to accelerate deep learning, reducing training times and inference latencies by orders of magnitude.
The CS-2 is powered by the largest processor ever built — the industry’s only 2.6-trillion-transistor silicon device. The Cerebras Wafer Scale Engine 2 (WSE-2) delivers more AI-optimized compute cores, more fast memory, and more fabric bandwidth than any other deep learning processor in existence. At 46,225 mm², the WSE-2 is 56 times larger than the largest graphics processing unit, and it contains 123× more compute cores and 1,000× more high-performance on-chip memory than that GPU.
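The quoted ratios can be sanity-checked with a few lines of arithmetic. Note the GPU figures below are an assumption on my part — the NVIDIA A100 (an 826 mm² die, 6,912 CUDA cores, 40 MB of on-chip L2 cache) was the largest GPU at the time and is the likely comparison point, but the text does not name it:

```python
# Sanity check of the quoted WSE-2 vs. largest-GPU ratios.
# GPU figures are assumed (NVIDIA A100), not stated in the text.
wse2_area_mm2 = 46_225
wse2_cores = 850_000
wse2_sram_bytes = 40e9            # 40 GB of on-chip SRAM

gpu_area_mm2 = 826                # assumed: A100 die area
gpu_cores = 6_912                 # assumed: A100 CUDA cores
gpu_onchip_bytes = 40e6           # assumed: A100 L2 cache (40 MB)

print(f"die area: {wse2_area_mm2 / gpu_area_mm2:.0f}x")        # ~56x
print(f"cores:    {wse2_cores / gpu_cores:.0f}x")              # ~123x
print(f"on-chip:  {wse2_sram_bytes / gpu_onchip_bytes:.0f}x")  # ~1000x

# Aggregate I/O: 12 Ethernet links at 100 Gb/s each
print(f"I/O: {12 * 100 / 1000:.1f} Tb/s")                      # 1.2 Tb/s
```

Under those assumptions, the arithmetic reproduces the 56×, 123×, and 1,000× figures in the text, and the twelve 100 GbE links sum to the stated 1.2 Tb/s of I/O.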