H Wang, C Qin, Y Bai, Y Zhang, Y Fu - arXiv preprint arXiv:2103.06460, 2021 - arxiv.org
Neural network pruning typically removes connections or neurons from a pretrained, converged model, while a new pruning paradigm, pruning at initialization (PaI), attempts to …
The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their …
Can models with particular structure avoid being biased towards spurious correlations in out-of-distribution (OOD) generalization? Peters et al. (2016) provide a positive answer for …
The deployment constraints in practical applications necessitate the pruning of large-scale deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket …
Pruning large neural networks to create high-quality, independently trainable sparse masks, which can maintain similar performance to their dense counterparts, is very desirable due to …
Pruning, the task of sparsifying deep neural networks, has received increasing attention recently. Although state-of-the-art pruning methods extract highly sparse models, they neglect two …
Large neural networks can be pruned to a small fraction of their original size, with little loss in accuracy, by following a time-consuming "train, prune, re-train" approach. Frankle & …
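The "train, prune, re-train" recipe mentioned in the snippet above is usually realized as iterative magnitude pruning; the following is a minimal PyTorch-style sketch, assuming hypothetical model, train_loader, and train_one_epoch objects and an illustrative 20% pruning ratio per round (none of these values come from the papers listed here).

import torch
import torch.nn.utils.prune as prune

def train_prune_retrain(model, train_loader, train_one_epoch,
                        rounds=3, amount_per_round=0.2, epochs_per_round=10):
    # 1) Train: fit the dense model first.
    for _ in range(epochs_per_round):
        train_one_epoch(model, train_loader)
    for _ in range(rounds):
        # 2) Prune: drop the smallest-magnitude weights in each Linear/Conv layer.
        for module in model.modules():
            if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
                prune.l1_unstructured(module, name="weight", amount=amount_per_round)
        # 3) Re-train: fine-tune the surviving weights with the pruning mask held fixed.
        for _ in range(epochs_per_round):
            train_one_epoch(model, train_loader)
    return model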
Railway bridges exposed to extreme environmental conditions can gradually lose their effective cross-section at critical locations and cause catastrophic failure. This paper has …
AH Gadhikar, S Mukherjee… - … Conference on Machine …, 2023 - proceedings.mlr.press
Random masks define surprisingly effective sparse neural network models, as has been shown empirically. The resulting sparse networks can often compete with dense …
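As a concrete illustration of the random-mask setting in the last snippet, the sketch below samples a Bernoulli mask at a chosen sparsity and applies it to a layer's weights; the layer size and the 90% sparsity level are illustrative assumptions, not values from the cited work.

import torch

def random_mask(weight: torch.Tensor, sparsity: float = 0.9) -> torch.Tensor:
    # Keep each weight independently with probability (1 - sparsity).
    return (torch.rand_like(weight) > sparsity).float()

layer = torch.nn.Linear(512, 512)
mask = random_mask(layer.weight, sparsity=0.9)
with torch.no_grad():
    layer.weight.mul_(mask)  # zero out ~90% of the connections uniformly at random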