A Primer on Nvidia-Docker — Where Containers Meet GPUs


Traditional programs cannot access GPUs directly. They need a special parallel programming interface to move computations to the GPU. Nvidia, the most popular graphics card manufacturer, created the Compute Unified Device Architecture (CUDA), a parallel computing platform and programming model for general computing on GPUs. With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.

In GPU-enabled applications, the sequential part of the workload continues to run on the CPU — which is optimized for single-threaded performance — while the parallelized, compute-intensive part of the application is offloaded to run on thousands of GPU cores in parallel. To use CUDA, developers program in popular languages such as C, C++, Fortran, Python, and MATLAB, expressing parallelism through extensions in the form of a few basic keywords, as in the sketch below.
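To make that keyword-extension model concrete, here is a minimal CUDA C++ sketch (the kernel name vectorAdd, the array sizes, and the launch configuration are illustrative, not taken from the article). The __global__ keyword marks code that runs on the GPU, and the <<<blocks, threads>>> syntax launches it across thousands of threads:

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: each GPU thread adds one pair of elements.
// __global__ marks a function that runs on the GPU but is launched from the CPU.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index per thread
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;  // one million elements (arbitrary example size)
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays: the sequential part stays on the CPU.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) arrays and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Offload the compute-intensive part: one GPU thread per element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back to the CPU and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

On a machine with the CUDA toolkit installed, this would be compiled with nvcc; the million additions then run across GPU threads in parallel rather than in a sequential CPU loop.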

Read more at The New Stack