THE RACE TO EXASCALE SUPERCOMPUTERS
Powering AI, medicine, weather, and more. By Jarred Walton
SUPERCOMPUTERS are at the heart of modern life, powering many things that we take for granted. Imagine trying to find your way around an unfamiliar city without the help of Google Maps or Waze. And where would we get immediate answers to trivia questions without the likes of Alexa, Google, and Siri? Weather forecasts are no longer the crystal-ball guesswork they once were; disease and pharmaceutical research have sped up by orders of magnitude; we can get real-time translation of conversations in another language; and we’re rapidly approaching the day when most vehicles on the road will drive themselves.
What sort of hardware will power our future supercomputer overlords, and who are the biggest competitors in the race to exascale computing? Join us as we don our math nerd hats to discuss these marvels of engineering, how machine learning works, and why GPUs have become a critical part of improving speed and functionality. We’ll also look at where things may go next as companies look to improve efficiency and performance.
Forget Skynet, the Matrix, Tron’s Master Control Program, and all the other depictions of AI taking over the world. Supercomputers are working to solve some of our toughest problems and to provide answers to questions we haven’t yet thought to ask.
SUPERCOMPUTERS have gone by various names over the years.
The original mainframe systems of the 1950s, ’60s, and ’70s were, in effect, supercomputers, though the term predates personal computers, which didn’t become commonplace until the 1980s. In modern terms, a supercomputer simply means an installation with a far higher level of performance than your typical PC or even a high-end server.
Supercomputers typically occupy an entire data center, and while they are usually made up of thousands of servers called nodes, they’re more than just a bunch of systems in the same room. All the individual nodes are linked together via high-speed interconnects and are designed to work in concert on massive data sets distributed across the nodes.
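To make that division of labor concrete, here’s a minimal sketch of how a job can be split across nodes using MPI, the message-passing standard most supercomputers rely on. The Python mpi4py bindings are used purely for illustration (production codes are more often C, C++, or Fortran), and the problem, summing a big array, is deliberately trivial:

# Minimal sketch: a distributed sum using MPI via the mpi4py bindings.
# Illustrative only; assumes mpi4py and an MPI runtime are installed.
# Run with, for example: mpirun -n 4 python distributed_sum.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's ID
size = comm.Get_size()  # total number of processes

N = 1_000_000           # total elements in the (conceptual) data set
chunk = N // size

# Each rank generates only its own slice of the data set, so the
# full array never has to fit on a single node.
local = np.arange(rank * chunk, (rank + 1) * chunk, dtype=np.float64)
local_sum = local.sum()

# Partial results are combined over the interconnect.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum across {size} processes: {total:.0f}")

The pattern of computing locally and then communicating scales from four processes on a laptop to tens of thousands of nodes; the hard part at supercomputer scale is keeping that communication from becoming the bottleneck.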
Because they are used for scientific research, supercomputers are typically classified by their performance in FLOPS (or flops): floating-point operations per second. It’s not just any old floating-point calculation that counts, either; it’s 64-bit IEEE 754 double-precision (FP64) throughput that gets measured, and exascale means sustaining at least one exaflops, or 10^18 of those operations per second. Top500.org ranks the fastest and most efficient supercomputers twice a year, based on their performance in the LINPACK benchmark (specifically, High Performance Linpack, or HPL).
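To put a flops figure in perspective, the toy sketch below times a double-precision matrix multiply with NumPy and backs out an achieved FP64 rate. It is emphatically not LINPACK, but it rests on the same arithmetic: a dense multiply of two N-by-N matrices costs roughly 2*N^3 floating-point operations.

# Toy FP64 throughput estimate. This is not the LINPACK/HPL benchmark,
# just the same basic idea: time dense double-precision linear algebra.
import time
import numpy as np

N = 2048
a = np.random.rand(N, N)  # NumPy arrays are float64 (FP64) by default
b = np.random.rand(N, N)

start = time.perf_counter()
c = a @ b                 # dense matrix multiply: about 2 * N**3 FP64 ops
elapsed = time.perf_counter() - start

gflops = 2 * N**3 / elapsed / 1e9
print(f"~{gflops:.1f} GFLOPS of FP64 on this machine")

A typical desktop CPU lands in the tens to hundreds of gigaflops on a test like this; an exascale machine sustains 10^18 flops, millions of times more.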