Addressing neural network robustness with mixup and targeted labeling adversarial training

A Laugros, A Caplier, M Ospici - … 2020 Workshops: Glasgow, UK, August 23 …, 2020 - Springer
Despite their performance, Artificial Neural Networks are not reliable enough for
most industrial applications. They are sensitive to noise, rotations, blurs and adversarial …

Adversarial examples are a natural consequence of test error in noise

N Ford, J Gilmer, N Carlini, D Cubuk - arXiv preprint arXiv:1901.10513, 2019 - arxiv.org
Over the last few years, the phenomenon of adversarial examples---maliciously constructed
inputs that fool trained machine learning models---has captured the attention of the research …

A hierarchical assessment of adversarial severity

G Jeanneret, JC Pérez… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Adversarial Robustness is a growing field that evidences the brittleness of neural networks.
Although the literature on adversarial robustness is vast, a dimension is missing in these …

Are adversarial robustness and common perturbation robustness independant attributes?

A Laugros, A Caplier, M Ospici - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
Neural Networks have been shown to be sensitive to common perturbations such as blur,
Gaussian noise, rotations, etc. They are also vulnerable to some artificial malicious …

Benchmarking neural network robustness to common corruptions and perturbations

D Hendrycks, T Dietterich - arXiv preprint arXiv:1903.12261, 2019 - arxiv.org
In this paper we establish rigorous benchmarks for image classifier robustness. Our first
benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while …

PDA: Progressive data augmentation for general robustness of deep neural networks

H Yu, A Liu, X Liu, G Li, P Luo, R Cheng, J Yang… - arXiv preprint arXiv …, 2019 - arxiv.org
Adversarial images are designed to mislead deep neural networks (DNNs), attracting great
attention in recent years. Although several defense strategies achieved encouraging …

Noise is inside me! Generating adversarial perturbations with noise derived from natural filters

A Agarwal, M Vatsa, R Singh… - Proceedings of the …, 2020 - openaccess.thecvf.com
Deep learning solutions are vulnerable to adversarial perturbations and can lead a "frog"
image to be misclassified as a "deer" or a random pattern into a "guitar". Adversarial attack …

Training robust deep neural networks via adversarial noise propagation

A Liu, X Liu, H Yu, C Zhang, Q Liu… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
In practice, deep neural networks have been found to be vulnerable to various types of
noise, such as adversarial examples and corruption. Various adversarial defense methods …

FDA: Feature disruptive attack

A Ganeshan, V BS, RV Babu - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Though Deep Neural Networks (DNN) show excellent performance across various
computer vision tasks, several works show their vulnerability to adversarial samples, i.e. …

Certified defenses against adversarial examples

A Raghunathan, J Steinhardt, P Liang - arXiv preprint arXiv:1801.09344, 2018 - arxiv.org
While neural networks have achieved high accuracy on standard image classification
benchmarks, their accuracy drops to nearly zero in the presence of small adversarial …