The Biggest Shift in Supercomputing Since GPU Acceleration


If you followed what was underway at the International Supercomputing Conference (ISC) this week, you will already know that this shift is deep learning. Just two years ago, we were still fitting deep learning into the broader HPC picture from separate hardware and algorithmic points of view. Today, we are convinced it will cause a fundamental rethink of how the largest supercomputers are built and how the simulations they host are executed. After all, the pressures on efficiency, performance, scalability, and programmability are mounting, and relatively little in the way of new thinking has been able to penetrate those challenges.

The early applications of deep learning as an approximation approach to HPC are incredibly promising: experimental or supercomputer simulation data is used to train a neural network, and that network is then turned around in inference mode to replace or augment a traditional simulation. This work of using the traditional HPC simulation as the basis for training is happening fast and broadly, which means a major shift is coming to HPC applications and hardware far faster than some centers may be ready for. The potential impact, at least for some application areas, is far-reaching. Overall compute resource usage goes down compared to traditional simulations, which drives efficiency, and in some cases accuracy is improved. Ultimately, by allowing the simulation to become the training set, exascale-capable resources can be used either to scale a more informed simulation or to serve as the hardware base for a massively scalable neural network.
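As a rough illustration of that simulate-train-infer workflow, the sketch below generates training data from a stand-in simulation, fits a small neural network surrogate, and then uses the network in inference mode in place of the expensive solver. The expensive_simulation function, the network size, and the use of scikit-learn are assumptions for illustration only, not details taken from the article or from any particular HPC code.

```python
# Minimal sketch of the simulate -> train -> infer surrogate loop described above.
# Assumptions (not from the article): `expensive_simulation` stands in for a costly
# HPC solver, and a small MLP is a placeholder for a real surrogate network.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(params: np.ndarray) -> np.ndarray:
    """Stand-in for a costly simulation: maps input parameters to a scalar result."""
    return np.sin(params[:, 0]) * np.cos(params[:, 1]) + 0.1 * params[:, 2] ** 2

# 1. Run the traditional simulation to generate training data.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(5000, 3))
y_train = expensive_simulation(X_train)

# 2. Train a neural-network surrogate on the simulation outputs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# 3. Use the trained network in inference mode to replace or augment the simulation.
X_new = rng.uniform(-1.0, 1.0, size=(10, 3))
approx = surrogate.predict(X_new)      # cheap surrogate prediction
exact = expensive_simulation(X_new)    # what the full simulation would return
print("max abs error on new inputs:", np.abs(approx - exact).max())
```

In a real deployment the training data would come from archived simulation or experimental results rather than a closed-form function, and the surrogate would be validated against the full solver before it is trusted to stand in for it.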

Read more at The Next Platform