A newly published whitepaper looks at a modification of the Huffman algorithm that permits uncompressed data to be decomposed into independently compressible and decompressible blocks, allowing compression and decompression to run concurrently on multiple processors.
One major difficulty in achieving good speedup with few negative side effects is that lossless data compression algorithms, in their unaltered form, are generally not highly parallelizable. Expressing these algorithms in parallel usually means trading compression efficiency against performance. Nevertheless, the authors aim to demonstrate that a reasonable middle ground between coding acceleration and efficiency loss is achievable.
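To make the block-oriented idea concrete, here is a minimal sketch, not the paper's implementation: the input is split into fixed-size blocks, each block is Huffman-coded independently by its own worker, and the per-block code table and bit length travel with the payload so the blocks can later be decoded in parallel as well. The block size, the bit-packing format, and all function names are illustrative assumptions rather than details taken from the whitepaper.

```python
import heapq
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

BLOCK_SIZE = 64 * 1024  # illustrative; the paper's block size may differ


def build_code_table(block: bytes) -> dict:
    """Build a Huffman code table (symbol -> bit string) for one block."""
    freq = Counter(block)
    if len(freq) == 1:                      # degenerate block: one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries are (frequency, tie_breaker, subtree); a subtree is either
    # a symbol (leaf) or a (left, right) pair, so symbols are never compared.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codes = {}

    def walk(tree, prefix=""):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix

    walk(heap[0][2])
    return codes


def compress_block(block: bytes):
    """Huffman-code one block independently of every other block."""
    codes = build_code_table(block)
    bits = "".join(codes[b] for b in block)
    padded = bits + "0" * (-len(bits) % 8)   # pad to a whole number of bytes
    packed = int(padded, 2).to_bytes(len(padded) // 8, "big")
    return codes, len(bits), packed          # the code table travels with the block


def decompress_block(codes: dict, bit_len: int, packed: bytes) -> bytes:
    """Invert compress_block; needs nothing from neighbouring blocks."""
    inverse = {code: sym for sym, code in codes.items()}
    bits = bin(int.from_bytes(packed, "big"))[2:].zfill(len(packed) * 8)[:bit_len]
    out, current = bytearray(), ""
    for bit in bits:
        current += bit
        if current in inverse:
            out.append(inverse[current])
            current = ""
    return bytes(out)


def parallel_compress(data: bytes):
    """Split the input into blocks and compress them concurrently."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(compress_block, blocks))


def parallel_decompress(compressed) -> bytes:
    """Decode all blocks concurrently and reassemble the original data."""
    tables, lengths, payloads = zip(*compressed)
    with ProcessPoolExecutor() as pool:
        return b"".join(pool.map(decompress_block, tables, lengths, payloads))
```

Calling parallel_decompress(parallel_compress(data)) should round-trip the input; on platforms that spawn worker processes (Windows, macOS) the calls need to sit under an if __name__ == "__main__": guard. Shipping a full code table with every block is the simplest way to keep blocks self-contained, but it adds per-block overhead, which is exactly the kind of efficiency loss the tradeoff above refers to.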