Adversarially robust distillation

M Goldblum, L Fowl, S Feizi, T Goldstein - Proceedings of the AAAI …, 2020 - aaai.org
Knowledge distillation is effective for producing small, high-performance neural
networks for classification, but these small networks are vulnerable to adversarial attacks …
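
Since the snippet names knowledge distillation but the abstract is truncated before any detail, here is a minimal sketch of the standard soft-label distillation objective; the temperature, loss weighting, and toy linear teacher/student are illustrative assumptions, not the cited paper's robust-distillation method.

# Minimal knowledge-distillation sketch (standard soft-target loss, not the
# adversarially robust variant from the cited paper). Temperature T and
# weight alpha are assumed illustrative values.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend softened teacher targets with the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random data and linear stand-ins for teacher/student networks.
teacher, student = nn.Linear(32, 10), nn.Linear(32, 10)
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
with torch.no_grad():
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()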

Dealing with robustness of convolutional neural networks for image classification

P Arcaini, A Bombarda, S Bonfanti… - 2020 IEEE …, 2020 - ieeexplore.ieee.org
Software-based systems increasingly depend on AI, even for critical tasks. For instance, the use
of machine learning, especially for image recognition, continues to grow. As state-of …

Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness

G Piras, M Pintor, A Demontis, B Biggio… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent work has proposed neural network pruning techniques to reduce the size of a
network while preserving robustness against adversarial examples, i.e., well-crafted inputs …

A Framework for Including Uncertainty in Robustness Evaluation of Bayesian Neural Network Classifiers

W Essbai, A Bombarda, S Bonfanti… - Proceedings of the 5th …, 2024 - dl.acm.org
Neural networks (NNs) play a crucial role in safety-critical fields, requiring robustness
assurance. Bayesian Neural Networks (BNNs) address data uncertainty, providing …

Adversarial Robustness and Robust Meta-Learning for Neural Networks

M Goldblum - 2020 - search.proquest.com
Despite the overwhelming success of neural networks for pattern recognition, these models
behave categorically differently from humans. Adversarial examples, small perturbations …
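
The snippet mentions adversarial examples as small perturbations but is cut off before defining them; below is a minimal FGSM-style sketch of how such a perturbation is typically crafted. The epsilon value and the toy linear model are illustrative assumptions, not the dissertation's setup.

# Minimal FGSM sketch: perturb the input by a small step in the direction of
# the sign of the loss gradient. Epsilon and the toy model are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Return an adversarially perturbed copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy usage on random data.
model = nn.Linear(784, 10)
x = torch.rand(4, 784)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)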