The Engine of HPC and Machine Learning


There is no question right now that if you have a big computing job in either high performance computing (the colloquial name for traditional massively parallel simulation and modeling applications) or in machine learning (the set of statistical analysis routines with feedback loops that can handle identification and transformation tasks that used to be solely the realm of humans), then an Nvidia GPU accelerator is the engine of choice to run that work at the best efficiency.

It is usually difficult to make such clean proclamations in the IT industry, with so many different kinds of compute available. But Nvidia is in a unique position, one it has earned through more than a decade of intense engineering: it really does not have effective competition in the compute areas where it plays.

Parallel routines written in C, C++, or Fortran were offloaded from CPUs to GPUs in the first place because the CPUs did not have sufficient memory bandwidth to handle these routines.
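As a rough illustration of that offload pattern, the sketch below moves a bandwidth-bound loop (a STREAM-style triad) from the CPU onto a GPU using CUDA. The kernel, array size, and launch parameters are illustrative assumptions chosen for this example, not details from the article.

```cuda
// Minimal sketch of offloading a bandwidth-bound loop to a GPU.
// The triad kernel and problem size are assumptions for illustration.
#include <cstdio>
#include <cuda_runtime.h>

// STREAM-style triad: a[i] = b[i] + scalar * c[i]. Performance here is
// limited by memory bandwidth, not arithmetic throughput.
__global__ void triad(float *a, const float *b, const float *c,
                      float scalar, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] = b[i] + scalar * c[i];
}

int main() {
    const int n = 1 << 24;                    // ~16M elements (assumed size)
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);             // unified memory for brevity
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { b[i] = 1.0f; c[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    triad<<<blocks, threads>>>(a, b, c, 3.0f, n);  // the offloaded loop
    cudaDeviceSynchronize();

    printf("a[0] = %f\n", a[0]);              // expect 7.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```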

Read more at The Next Platform