November 18, 2009, 9:53 pm
Penguin Computing announced this week that it has added GPU goodness to its Penguin on Demand (POD) hosted computing service (announced back in August).
Penguin Computing, experts in high performance computing solutions, today announced that Tesla GPU compute nodes are available in its Penguin on Demand (POD) system. Tesla-equipped PODs will now provide a pay-as-you-go environment for researchers, scientists and engineers to explore the benefits of GPU computing in a hosted environment.
The POD system provides on-demand access to highly optimized Linux clusters with specialized hardware interconnects and software configurations tuned specifically for HPC workloads. The addition of NVIDIA's Tesla GPU compute systems to POD now lets users port their applications to CUDA or OpenCL and test the results quickly, without any capital outlay.
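For readers curious what a CUDA port actually looks like, here is a minimal vector-addition sketch in the style of the CUDA C programming model of the day. It is purely illustrative (not taken from Penguin's or NVIDIA's materials), and the sizes and launch parameters are arbitrary choices:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // 1M elements (arbitrary)
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float *hA = new float[n], *hB = new float[n], *hC = new float[n];
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device-side buffers and host-to-device copies.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    delete[] hA; delete[] hB; delete[] hC;
    return 0;
}
```

The appeal of a hosted service like POD is that a team can compile and benchmark a kernel like this on real Tesla hardware before committing to buying any GPUs of their own.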
Not up to speed on POD? More here.