An open source implementation of OpenCL automatically deploys code across many types of hardware, speeding up machine learning and other workloads.
LLVM, the open source compiler framework that powers everything from Mozilla’s Rust language to Apple’s Swift, is taking on yet another significant role: enabling code deployment systems that target multiple classes of hardware to speed up workloads like machine learning.
To write code that can run on CPUs, GPUs, ASICs, and FPGAs, which is especially useful for machine learning applications, it’s best to use a framework like OpenCL, which allows a program to be written once and then automatically deployed across different types of hardware.
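To make the write-once idea concrete, here is a minimal sketch in C of how an OpenCL host program hands the same kernel source to whatever device the runtime exposes, whether that is a CPU, a GPU, or another accelerator. It assumes an OpenCL runtime and headers are installed; the `scale` kernel, the default device selection, and the omission of error handling are all illustrative choices, not anything prescribed by the article.

```c
/* Minimal sketch: the same kernel source is compiled at runtime for
 * whichever device the installed OpenCL platform exposes. Error handling
 * is omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

/* Kernel source is plain text; the driver compiles it for the chosen device. */
static const char *kernel_src =
    "__kernel void scale(__global float *data, float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] = data[i] * factor;\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    /* CL_DEVICE_TYPE_DEFAULT picks whatever device the vendor exposes first;
     * the host code does not change between CPU, GPU, or accelerator. */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Compile the kernel for this specific device at runtime. */
    cl_program program = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(program, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(program, "scale", NULL);

    /* A small buffer to demonstrate the round trip. */
    float data[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float factor = 10.0f;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);

    size_t global_size = 4;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("%f %f %f %f\n", data[0], data[1], data[2], data[3]);

    clReleaseMemObject(buf);
    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}
```

Because the kernel is compiled by the driver when the program runs, swapping the underlying hardware only requires a different OpenCL implementation to be installed; the application code itself stays the same.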
Read more at InfoWorld