Correctness-preserving compression of datasets and neural network models

V Joseph, N Chalapathi, A Bhaskara… - 2020 IEEE/ACM 4th International Workshop on Software Correctness …, 2020 - ieeexplore.ieee.org
Neural networks deployed on edge devices must be efficient both in terms of their model size and the amount of data movement they cause when classifying inputs. These efficiencies are typically achieved through model compression: pruning a fully trained network model by zeroing out weights. Given the overall challenge of neural network correctness, we argue that focusing on correctness preservation may allow the community to make measurable progress. We present a state-of-the-art model compression framework called Condensa, around which we have launched correctness-preservation studies. After presenting Condensa, we describe our initial efforts at understanding the effect of model compression in semantic terms, going beyond the top-n% accuracy that Condensa currently relies on. We also take up the relatively unexplored direction of data compression, which may help reduce data movement. We report preliminary results of learning from decompressed data to understand the effects of compression artifacts. Learning without decompressing input data also holds promise for boosting efficiency, and we report preliminary results in this regard as well. Our experiments, centered around the Condensa model compression framework and two data compression algorithms, namely JPEG and ZFP, demonstrate the potential for employing model and dataset compression without adversely affecting correctness.
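The abstract describes model compression as pruning a trained network by zeroing out weights. The sketch below illustrates that generic idea with global magnitude pruning in PyTorch; it is only an assumption-laden illustration of the technique, not Condensa's actual API or the paper's compression schemes, and the model and density value are placeholders.

```python
# Minimal sketch of magnitude-based pruning: zero out the smallest-magnitude
# weights of a trained model. Illustrative only; not Condensa's implementation.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, density: float) -> nn.Module:
    """Keep only the `density` fraction of largest-magnitude weights; zero the rest."""
    with torch.no_grad():
        # Collect all weight magnitudes to compute a single global threshold.
        all_weights = torch.cat([p.abs().flatten()
                                 for name, p in model.named_parameters()
                                 if "weight" in name])
        k = int((1.0 - density) * all_weights.numel())
        if k > 0:
            threshold = all_weights.kthvalue(k).values
            for name, p in model.named_parameters():
                if "weight" in name:
                    # Zero every weight whose magnitude falls below the threshold.
                    p.mul_((p.abs() > threshold).float())
    return model

# Hypothetical example: prune a small network to 30% density (70% of weights zeroed).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, density=0.3)
```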
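The abstract also mentions learning from decompressed data to study the effects of compression artifacts (with JPEG as one of the codecs). A minimal way to set up such an experiment, assuming a standard torchvision dataset and an arbitrary quality setting rather than the paper's actual configuration, is to apply a lossy JPEG round-trip to each training image before it reaches the model:

```python
# Minimal sketch of "learning from decompressed data": each training image is
# JPEG-compressed in memory and decoded again, so the model trains on data that
# carries compression artifacts. Quality and dataset are placeholder choices.
import io
from PIL import Image
from torchvision import datasets, transforms

def jpeg_roundtrip(img: Image.Image, quality: int = 30) -> Image.Image:
    """Compress an image to JPEG in memory and decode it back to a PIL image."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()

train_transform = transforms.Compose([
    transforms.Lambda(lambda img: jpeg_roundtrip(img, quality=30)),
    transforms.ToTensor(),
])

# Hypothetical example: wrap a standard dataset with the lossy transform.
trainset = datasets.CIFAR10(root="./data", train=True, download=True,
                            transform=train_transform)
```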