Removing the Storage Bottleneck for AI

If the history of high performance computing has taught us anything, it is that we cannot focus too much on compute at the expense of storage and networking. Having all of the compute in the world doesn’t mean diddlysquat if the storage can’t get data to the compute elements – whatever they might be – in a timely fashion with good sustained performance.

Many organizations that have invested in GPU accelerated servers are finding this out the hard way when their performance comes up short as they get down to the work of training their neural networks…

The problem is that the datasets needed to store the largely unstructured data that feeds neural networks and lets them do their statistical magic are growing at an exponential rate, and so is the computational requirement to chew on that data.

Read more at The Next Platform