Efficient neural network compression inspired by compressive sensing

W Gao, Y Guo, S Ma, G Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Traditional neural network compression (NNC) methods decrease the model size and
floating-point operations (FLOPs) by screening out unimportant weight …

Three ways that non-differentiability affects neural network training

SK Kumar - arXiv preprint arXiv:2401.08426, 2024 - arxiv.org
This paper investigates how non-differentiability affects three different aspects of the neural
network training process. We first analyze fully connected neural networks with ReLU …

Extended regularized dual averaging methods for stochastic optimization

JW Siegel, J Xu - arXiv preprint arXiv:1904.02316, 2019 - arxiv.org
We introduce a new algorithm, extended regularized dual averaging (XRDA), for solving
regularized stochastic optimization problems, which generalizes the regularized dual …
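The XRDA snippet above names ℓ1-regularized dual averaging but gives no detail. As orientation only, here is a minimal sketch of the closed-form soft-thresholding update used by standard ℓ1 regularized dual averaging (in the style of Xiao's RDA), not the paper's XRDA method; the function name `rda_l1_step` and the parameters `lam` and `gamma` are illustrative assumptions.

```python
import numpy as np

def rda_l1_step(grad_avg, t, lam, gamma):
    """One l1-regularized dual-averaging update (RDA-style closed form).

    grad_avg : running average of (sub)gradients up to step t
    lam      : l1 regularization weight
    gamma    : strength of the sqrt(t)-scaled proximal term
    Returns the new iterate, with exact zeros wherever |grad_avg| <= lam.
    """
    shrunk = np.maximum(np.abs(grad_avg) - lam, 0.0)   # soft-threshold the averaged gradient
    return -(np.sqrt(t) / gamma) * np.sign(grad_avg) * shrunk

# Toy usage: coordinates whose averaged gradient is below lam are set to exact zero,
# which is what produces sparse iterates during training.
g_bar = np.array([0.8, -0.05, 0.3, -0.9])
print(rda_l1_step(g_bar, t=100, lam=0.1, gamma=10.0))   # [-0.7  0.  -0.2  0.8]
```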

Reducing Model Complexity and Overcoming Overfitting: Deep Learning Algorithms and Medical Applications

J Chen - 2021 - etda.libraries.psu.edu
In this dissertation, we first propose the xRDA algorithm with an adaptively weighted
$\ell^1$-regularization scheme and momentum for training sparse neural networks. Then we …

Back to Basics: Efficient Network Compression via IMP

M Zimmer, S Pokutta, C Spiegel - openreview.net
Network pruning is a widely used technique for effectively compressing Deep Neural
Networks with little to no degradation in performance during inference. Iterative Magnitude …
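The snippet above describes iterative magnitude pruning (IMP) only in outline. Below is a minimal numpy sketch of a single global magnitude-pruning round, one building block of an IMP loop (prune, then retrain the survivors, then repeat); the function name, the dict-of-arrays weight format, and the per-round sparsity value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """One round of global magnitude pruning: zero the smallest-|w| fraction.

    weights  : dict of layer name -> ndarray of weights
    sparsity : fraction of all weights to remove in this round (e.g. 0.2)
    Returns pruned weights and binary masks to hold fixed while retraining.
    """
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    threshold = np.quantile(all_mags, sparsity)   # global magnitude cutoff
    masks = {name: (np.abs(w) > threshold).astype(w.dtype)
             for name, w in weights.items()}
    pruned = {name: w * masks[name] for name, w in weights.items()}
    return pruned, masks

# Toy usage: remove 50% of weights across two "layers"; an IMP loop would now
# retrain the remaining weights under the masks and repeat for several rounds.
w = {"fc1": np.array([[0.9, -0.01], [0.3, 0.05]]), "fc2": np.array([0.2, -0.7])}
pruned, masks = magnitude_prune(w, sparsity=0.5)
```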