The lazy neuron phenomenon: On emergence of activation sparsity in transformers

Z Li, C You, S Bhojanapalli, D Li, AS Rawat… - arXiv preprint arXiv …, 2022 - arxiv.org
This paper studies the curious phenomenon that the activation maps of machine learning models with Transformer architectures are sparse. By activation map we refer to the …
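
The sparsity discussed here concerns the intermediate activations of the feed-forward (MLP) blocks. As a minimal sketch, assuming a ReLU feed-forward block (the dimensions and helper below are illustrative, not from the paper), one can measure the fraction of exactly-zero entries:

    import torch
    import torch.nn as nn

    def activation_sparsity(act: torch.Tensor) -> float:
        # Fraction of exactly-zero entries in a post-ReLU activation map.
        return (act == 0).float().mean().item()

    d_model, d_ff = 64, 256
    ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU())
    x = torch.randn(8, 16, d_model)   # (batch, tokens, d_model)
    print(f"sparsity: {activation_sparsity(ffn(x)):.2%}")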

Can model compression improve NLP fairness

G Xu, Q Hu - arXiv preprint arXiv:2201.08542, 2022 - arxiv.org
Model compression techniques are receiving increasing attention; however, the effect of
compression on model fairness is still underexplored. This is the first paper to examine the …
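
One simple way to probe the effect the abstract alludes to is to compare a per-group accuracy gap before and after compression. A minimal sketch, with placeholder arrays standing in for the paper's actual models and datasets:

    import numpy as np

    def group_accuracy_gap(preds, labels, groups):
        # Largest minus smallest accuracy across demographic groups.
        accs = [np.mean(preds[groups == g] == labels[groups == g])
                for g in np.unique(groups)]
        return max(accs) - min(accs)

    labels = np.array([0, 1, 1, 0, 1, 0])
    groups = np.array([0, 0, 0, 1, 1, 1])   # two hypothetical groups
    dense_preds = np.array([0, 1, 1, 0, 0, 0])
    compressed_preds = np.array([0, 1, 1, 1, 0, 0])
    print(group_accuracy_gap(dense_preds, labels, groups))       # ~0.33
    print(group_accuracy_gap(compressed_preds, labels, groups))  # gap widens here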

Robust low-rank training via approximate orthonormal constraints

D Savostianova, E Zangrando… - Advances in Neural …, 2024 - proceedings.neurips.cc
With the growth of model and data sizes, a broad effort has been made to design pruning
techniques that reduce the resource demand of deep learning pipelines, while retaining …
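
A common way to realise such a constraint is a soft penalty that keeps the singular values of the (low-rank) weight factors near one, which in turn bounds each layer's Lipschitz constant. A minimal sketch under that assumption (the parameterisation below is illustrative, not the paper's exact method):

    import torch

    def orthonormality_penalty(W: torch.Tensor) -> torch.Tensor:
        # Soft penalty ||W W^T - I||_F^2; zero iff the rows of W are orthonormal.
        gram = W @ W.T
        eye = torch.eye(W.shape[0], device=W.device)
        return ((gram - eye) ** 2).sum()

    # Low-rank parameterisation W ≈ U @ V with both factors regularised.
    U = torch.nn.Parameter(torch.randn(32, 8) / 8 ** 0.5)
    V = torch.nn.Parameter(torch.randn(8, 64) / 64 ** 0.5)
    reg = orthonormality_penalty(U.T) + orthonormality_penalty(V)
    reg.backward()   # would be added to the task loss during training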

Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence

S Pavlitska, H Grolig, JM Zöllner - 2023 IEEE Symposium …, 2023 - ieeexplore.ieee.org
Increasing the model capacity is a known approach to enhance the adversarial robustness
of deep learning networks. On the other hand, various model compression techniques …
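
The trade-off the review surveys is typically quantified by running the same attack on a model before and after compression. A minimal sketch using single-step FGSM, one standard probe (the toy model below is illustrative):

    import torch
    import torch.nn.functional as F

    def fgsm_accuracy(model, x, y, eps=0.03):
        # Accuracy under a single-step FGSM attack: a coarse robustness probe.
        x = x.clone().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        x_adv = (x + eps * x.grad.sign()).detach()
        return (model(x_adv).argmax(dim=1) == y).float().mean().item()

    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x, y = torch.randn(16, 1, 28, 28), torch.randint(0, 10, (16,))
    print(f"robust accuracy: {fgsm_accuracy(model, x, y):.2%}")
    # Evaluating the dense and the compressed model alike quantifies the trade-off.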

Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study

P Mitra, G Schwalbe, N Klein - Proceedings of the IEEE/CVF …, 2024 - openaccess.thecvf.com
Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance
in many computer vision tasks. However, high computational and storage demands hinder …
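
Calibration in such benchmarks is usually reported as the expected calibration error (ECE). A minimal sketch with synthetic confidences standing in for the outputs of a dense versus post-hoc pruned CNN:

    import numpy as np

    def expected_calibration_error(conf, correct, n_bins=10):
        # Confidence-vs-accuracy gap per bin, weighted by bin occupancy.
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (conf > lo) & (conf <= hi)
            if mask.any():
                ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
        return ece

    conf = np.random.uniform(0.5, 1.0, size=1000)
    correct = (np.random.uniform(size=1000) < conf).astype(float)  # well calibrated
    print(f"ECE: {expected_calibration_error(conf, correct):.4f}")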

Recall distortion in neural network pruning and the undecayed pruning algorithm

A Good, J Lin, X Yu, H Sieg… - Advances in …, 2022 - proceedings.neurips.cc
Pruning techniques have been successfully used in neural networks to trade accuracy for
sparsity. However, the impact of network pruning is not uniform: prior work has shown that …
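
The non-uniform impact in question shows up in per-class recall. A minimal sketch comparing a dense and a pruned model's recall vectors (synthetic predictions stand in for real models):

    import numpy as np

    def per_class_recall(preds, labels, num_classes):
        # Recall for each class; pruning papers study the spread of its change.
        return np.array([np.mean(preds[labels == c] == c)
                         for c in range(num_classes)])

    labels = np.random.randint(0, 5, size=2000)
    noise = lambda p: np.where(np.random.rand(2000) < p, labels, (labels + 1) % 5)
    dense_preds, pruned_preds = noise(0.90), noise(0.85)
    delta = (per_class_recall(pruned_preds, labels, 5)
             - per_class_recall(dense_preds, labels, 5))
    print("per-class recall change:", np.round(delta, 3))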

Can pruning improve certified robustness of neural networks?

Z Li, T Chen, L Li, B Li… - Transactions on Machine …, 2022 - openreview.net
With the rapid development of deep learning, the sizes of deep neural networks are growing
beyond the affordability of hardware platforms. Given the fact that neural networks are …
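
Certified robustness is established with verifiers rather than attacks; interval bound propagation (IBP) is one standard primitive. A minimal sketch of IBP over a small ReLU network (illustrative only; the paper evaluates several certification methods):

    import torch

    def ibp_bounds(layers, x, eps):
        # Sound output bounds for every perturbation with ||delta||_inf <= eps.
        lo, hi = x - eps, x + eps
        for layer in layers:
            if isinstance(layer, torch.nn.Linear):
                mid = (lo + hi) / 2 @ layer.weight.T + layer.bias
                rad = (hi - lo) / 2 @ layer.weight.abs().T
                lo, hi = mid - rad, mid + rad
            else:  # ReLU is monotone, so bounds pass through elementwise
                lo, hi = lo.clamp(min=0), hi.clamp(min=0)
        return lo, hi

    net = [torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 3)]
    lo, hi = ibp_bounds(net, torch.randn(1, 4), eps=0.01)
    # A label is certified if its lower bound beats every other class's upper bound.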

Towards Compact and Robust Model Learning Under Dynamically Perturbed Environments

H Luo, Z Zhuang, Y Li, M Tan, C Chen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Network pruning has been widely studied to reduce the complexity of deep neural networks
(DNNs) and hence speed up their inference. Unfortunately, most existing pruning methods …
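
The canonical baseline such methods build on is magnitude pruning: zero the smallest-magnitude weights, then fine-tune. A minimal sketch of that baseline (not the paper's method for dynamically perturbed environments):

    import torch

    def magnitude_prune_(weight: torch.Tensor, sparsity: float) -> None:
        # Zero the smallest-magnitude fraction of entries, in place.
        k = int(weight.numel() * sparsity)
        if k > 0:
            thresh = weight.abs().flatten().kthvalue(k).values
            weight[weight.abs() <= thresh] = 0.0

    layer = torch.nn.Linear(128, 128)
    with torch.no_grad():
        magnitude_prune_(layer.weight, sparsity=0.8)
    print((layer.weight == 0).float().mean())   # ~0.8 of the weights are zero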

POSTER: ML-Compass: A Comprehensive Assessment Framework for Machine Learning Models

Z Jin, Z Zhu, H Hu, M Xue, H Chen - Proceedings of the 2023 ACM Asia …, 2023 - dl.acm.org
Machine learning models have made significant breakthroughs across various domains.
However, it is crucial to assess these models to obtain a complete understanding of their …

Towards global neural network abstractions with locally-exact reconstruction

E Manino, I Bessa, LC Cordeiro - Neural Networks, 2023 - Elsevier
Neural networks are a powerful class of non-linear functions. However, their black-box
nature makes it difficult to explain their behaviour and certify their safety. Abstraction …