Tuple-oriented compression for large-scale mini-batch stochastic gradient descent

F Li, L Chen, Y Zeng, A Kumar, X Wu…
Proceedings of the 2019 International Conference on Management of Data, 2019 - dl.acm.org
Data compression is a popular technique for improving the efficiency of data processing workloads such as SQL queries and, more recently, machine learning (ML) with classical batch gradient methods. But the efficacy of such ideas for mini-batch stochastic gradient descent (MGD), arguably the workhorse algorithm of modern ML, is an open question. MGD's unique data access pattern renders prior art, including techniques designed for batch gradient methods, less effective. We fill this crucial research gap by proposing a new lossless compression scheme we call tuple-oriented compression (TOC). It is inspired by an unlikely source, the string/text compression scheme Lempel-Ziv-Welch, but tailored to MGD in a way that preserves tuple boundaries within mini-batches. We then present a suite of novel compressed matrix operation execution techniques tailored to the TOC compression scheme that operate directly over the compressed data representation and avoid decompression overheads. An extensive empirical evaluation with real-world datasets shows that TOC consistently achieves substantial compression ratios of up to 51x and reduces runtimes for MGD workloads by up to 10.2x in popular ML systems.
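To give a feel for the two ideas the abstract combines, here is a minimal Python sketch, not the paper's actual TOC implementation: an LZW-style dictionary is built over a mini-batch but each row is encoded separately (so tuple boundaries survive), and dot products are then computed directly over the codes by caching a partial sum per dictionary entry instead of decompressing. The function names `lzw_compress_batch` and `dot_products`, and the `(column, value)` symbol encoding, are illustrative assumptions, not the paper's API.

```python
def lzw_compress_batch(batch):
    """LZW-style compression of a mini-batch, one code list per tuple.

    Each row is a sequence of (column, value) symbols (a sparse feature
    vector). The dictionary is shared across rows, but encoding restarts at
    every row, so each tuple can be processed independently.
    Illustrative sketch only, not the paper's TOC algorithm.
    """
    table = {}  # symbol sequence (tuple of symbols) -> integer code
    # Pre-seed the dictionary with every distinct single symbol.
    for row in batch:
        for sym in row:
            table.setdefault((sym,), len(table))
    encoded = []
    for row in batch:
        codes, seq = [], ()
        for sym in row:
            if seq + (sym,) in table:
                seq += (sym,)            # extend the current match
            else:
                codes.append(table[seq])  # emit longest known prefix
                table[seq + (sym,)] = len(table)  # grow dictionary
                seq = (sym,)
        codes.append(table[seq])          # flush at the tuple boundary
        encoded.append(codes)
    return table, encoded


def dot_products(table, encoded, w):
    """Compute x . w for every tuple directly over the compressed codes.

    Each dictionary entry's partial dot product against w is computed once
    and reused wherever that code appears, avoiding decompression.
    """
    entries = {code: seq for seq, code in table.items()}
    cache = {}  # code -> cached partial sum
    results = []
    for codes in encoded:
        total = 0.0
        for c in codes:
            if c not in cache:
                cache[c] = sum(val * w[col] for col, val in entries[c])
            total += cache[c]
        results.append(total)
    return results
```

Rows that share a prefix of symbols collapse to a single code, and the cached partial sum for that code is reused across every tuple that references it; this is the redundancy MGD mini-batches expose that the paper exploits.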