While scientific research using FP64 remains important, Nvidia is betting big on the increasing importance of AI workloads. Its current Selene supercomputer already ranks among the ten fastest machines for traditional FP64 workloads, and Nvidia says it delivers 2.8 exaflops of AI performance. If Selene sounds potent, just wait for the upcoming Eos supercomputer, built on Nvidia’s latest Hopper H100 GPU.
Eos will consist of 18 DGX H100 SuperPODs, each with 32 DGX H100 servers. The servers will be powered by dual Intel Xeon processors, but Nvidia doesn’t even make a point of mentioning the CPUs, as they’re child’s play compared to the GPU hardware.
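For a sense of scale, a quick back-of-the-envelope calculation from those figures, assuming the standard eight-GPU DGX H100 configuration (a number not stated above), looks like this:

```python
# Rough sizing of Eos from the figures in the article.
# Assumption: each DGX H100 server carries 8 H100 GPUs (the standard DGX config).
superpods = 18           # DGX H100 SuperPODs in Eos
dgx_per_superpod = 32    # DGX H100 servers per SuperPOD
gpus_per_dgx = 8         # assumed standard DGX H100 loadout

total_dgx = superpods * dgx_per_superpod   # 576 DGX H100 systems
total_gpus = total_dgx * gpus_per_dgx      # 4,608 H100 GPUs

print(f"Eos: {total_dgx} DGX H100 systems, {total_gpus} H100 GPUs")
```

Under that assumption, the build works out to 576 DGX H100 systems and 4,608 H100 GPUs in total.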