Interview: Nvidia Looks to the Future of GPU Computing


Mary Branscombe over at ZDNet interviews Nvidia’s CEO, Jen-Hsun Huang, about how the company’s Tesla and Fermi-based GPUs are making parallel computing mainstream. According to Huang, the key challenge in realizing parallel performance is data movement:

In computer graphics, the traditional APIs of the past, the ones that all failed are the ones that moved data back and forth. They’re all dead. We want the parallel computing environment that streams the data to the right place so that the processors can all access that large memory space, and move it around as little as we can. Conceptually that’s what we need to do. Some of the things that we are already working on, say, with InfiniBand, we want to feed directly into our GPU or we want to DMA into our GPU so that you don’t copy into system memory and then copy back out from system memory. So you want to figure out a way to move data as little as possible and now that you’ve moved it as little as you can, you just need to move it as fast as you can. There is just no replacement for terabytes per second.
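
To make the idea concrete, here is a minimal CUDA sketch of our own (not code from Nvidia or the interview) contrasting the two transfer paths Huang alludes to: staging data through ordinary pageable system memory versus using pinned (page-locked) memory that the GPU's DMA engine can read directly, which skips the extra staging copy. Buffer names and sizes are illustrative assumptions.

// Illustrative sketch, assuming a CUDA-capable device.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t n = 1 << 20;               // 1M floats (hypothetical size)
    const size_t bytes = n * sizeof(float);
    float *d_buf;
    cudaMalloc((void **)&d_buf, bytes);

    // Pageable path: the runtime must first copy the data into an internal
    // staging buffer in system memory before the DMA transfer to the device.
    float *pageable = (float *)malloc(bytes);
    cudaMemcpy(d_buf, pageable, bytes, cudaMemcpyHostToDevice);

    // Pinned path: page-locked host memory lets the GPU DMA straight from
    // the application's buffer, avoiding the extra copy through system memory.
    float *pinned;
    cudaHostAlloc((void **)&pinned, bytes, cudaHostAllocDefault);
    cudaMemcpyAsync(d_buf, pinned, bytes, cudaMemcpyHostToDevice, 0);
    cudaDeviceSynchronize();

    cudaFreeHost(pinned);
    cudaFree(d_buf);
    free(pageable);
    printf("transfers complete\n");
    return 0;
}

Technologies like GPUDirect extend the same principle further, letting an InfiniBand adapter DMA into GPU memory without bouncing through system memory at all, which is the scenario Huang describes above.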

Read more at insideHPC