Adversarial examples are not bugs, they are features

A Ilyas, S Santurkar, D Tsipras… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial examples have attracted significant attention in machine learning, but the
reasons for their existence and pervasiveness remain unclear. We demonstrate that …
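
For context on the object this paper studies, a minimal sketch of how an adversarial example is typically generated: the one-step FGSM attack (Goodfellow et al., 2015) in PyTorch. This is the standard construction, not this paper's robust/non-robust feature experiments; `model`, `x`, and `y` are assumed to be a trained classifier and an image batch in [0, 1] with labels.

```python
# Minimal FGSM sketch; an assumption-laden illustration, not the
# dataset construction from Ilyas et al. 2019.
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 8 / 255) -> torch.Tensor:
    """Perturb x by one signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```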

Adversarial examples make strong poisons

L Fowl, M Goldblum, P Chiang… - Advances in …, 2021 - proceedings.neurips.cc
The adversarial machine learning literature is largely partitioned into evasion attacks on
testing data and poisoning attacks on training data. In this work, we show that adversarial …
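
A hedged sketch of the bridge the abstract describes: adversarial perturbations crafted against a pretrained "crafting" model, then released as training data (with the original labels) to poison downstream training. The PGD loop, the class-targeted labeling, and all hyperparameters here are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: reuse test-time adversarial perturbations as training poisons.
import torch
import torch.nn.functional as F

def craft_poison(model, x, y, num_classes, eps=8/255, step=2/255, iters=10):
    target = (y + 1) % num_classes          # class-targeted wrong labels (assumption)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # descend: make the wrong class likely
            delta.clamp_(-eps, eps)            # keep the perturbation imperceptible
        delta.grad.zero_()
    # Poisoned images are released with the ORIGINAL labels y.
    return (x + delta).clamp(0, 1).detach()
```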

Understanding adversarial examples from the mutual influence of images and perturbations

C Zhang, P Benz, T Imtiaz… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
A wide variety of works have explored the reason for the existence of adversarial examples,
but there is no consensus on the explanation. We propose to treat the DNN logits as a vector …
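
A hedged sketch of the style of analysis the abstract hints at: treat the logits of an image and of its stand-alone perturbation as vectors and measure how aligned they are. The cosine-similarity metric and function names are assumptions for illustration.

```python
# Sketch: compare logit vectors of an image vs. its bare perturbation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def logit_alignment(model, x, x_adv):
    delta = x_adv - x                          # the perturbation itself
    z_img, z_pert = model(x), model(delta)     # logits of image vs. perturbation
    return F.cosine_similarity(z_img, z_pert, dim=1)  # one score per sample
```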

Availability attacks create shortcuts

D Yu, H Zhang, W Chen, J Yin, TY Liu - Proceedings of the 28th ACM …, 2022 - dl.acm.org
Availability attacks, which poison the training data with imperceptible perturbations, can
render the data unexploitable by machine learning algorithms, thereby preventing unauthorized …
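
A hedged sketch of the "shortcut" view: availability poisons behave like simple, class-correlated patterns that are trivially learnable. Assigning one fixed random pattern per class is an illustrative stand-in for the synthetic, linearly separable noise the paper studies, not its exact construction.

```python
# Sketch: class-wise shortcut noise as an availability poison.
import torch

def classwise_shortcut_noise(num_classes, shape, eps=8/255, seed=0):
    g = torch.Generator().manual_seed(seed)
    # One fixed +/-eps pattern per class; every image of class c gets
    # pattern c, handing the network a label-correlated shortcut feature.
    return eps * torch.randn(num_classes, *shape, generator=g).sign()

noise = classwise_shortcut_noise(10, (3, 32, 32))
# poisoned = (clean + noise[labels]).clamp(0, 1)
```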

Better safe than sorry: Preventing delusive adversaries with adversarial training

L Tao, L Feng, J Yi, SJ Huang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by
slightly perturbing the features of correctly labeled training examples. By formalizing this …
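
The defense this paper advocates is adversarial training. Below is a minimal Madry-style min-max training step as a sketch; the inner PGD settings are typical defaults, not values taken from the paper.

```python
# Sketch: one adversarial-training step (inner max, then outer min).
import torch
import torch.nn.functional as F

def adv_train_step(model, opt, x, y, eps=8/255, step=2/255, iters=7):
    # Inner maximization: find a worst-case perturbation of the batch.
    delta = (torch.rand_like(x) * 2 - 1) * eps
    delta.requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    # Outer minimization: update the weights on the adversarial batch.
    opt.zero_grad()
    loss = F.cross_entropy(model((x + delta.detach()).clamp(0, 1)), y)
    loss.backward()
    opt.step()
    return loss.item()
```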

Universal adversarial perturbations through the lens of deep steganography: Towards a fourier perspective

C Zhang, P Benz, A Karjauv, IS Kweon - Proceedings of the AAAI …, 2021 - ojs.aaai.org
The booming interest in adversarial attacks stems from a misalignment between human
vision and a deep neural network (DNN), i.e., a human-imperceptible perturbation fools the …
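
A hedged sketch of a Fourier-style inspection like the title suggests: measure how much of a perturbation's energy sits in low versus high spatial frequencies. The radial threshold and function names are arbitrary assumptions, not the paper's analysis pipeline.

```python
# Sketch: split a perturbation's spectral energy into low/high bands.
import torch

def frequency_energy_split(delta: torch.Tensor, radius: int = 8):
    """delta: (C, H, W) perturbation. Returns (low, high) energy shares."""
    spec = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    power = spec.abs() ** 2
    C, H, W = delta.shape
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    dist = ((yy - H // 2) ** 2 + (xx - W // 2) ** 2).float().sqrt()
    low = power[..., dist <= radius].sum()     # energy inside the low-freq disk
    total = power.sum()
    return (low / total).item(), (1 - low / total).item()
```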

Closer look at the transferability of adversarial examples: How they fool different models differently

F Waseda, S Nishikawa, TN Le… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples (AEs), which have adversarial
transferability: AEs generated for the source model can mislead another (target) model's …
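
A hedged sketch of how transferability is commonly measured: craft adversarial examples on a source model, then check whether they also fool a target model. The FGSM attack here is a simple stand-in for whatever attack a given study uses.

```python
# Sketch: fraction of source-fooling AEs that also fool the target.
import torch
import torch.nn.functional as F

def transfer_rate(src, tgt, x, y, eps=8/255):
    x_req = x.clone().requires_grad_(True)
    F.cross_entropy(src(x_req), y).backward()
    x_adv = (x + eps * x_req.grad.sign()).clamp(0, 1)
    with torch.no_grad():
        fooled_src = src(x_adv).argmax(1) != y
        fooled_tgt = tgt(x_adv).argmax(1) != y
    return (fooled_src & fooled_tgt).float().sum() \
        / fooled_src.float().sum().clamp(min=1)
```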

Towards accuracy-fairness paradox: Adversarial example-based data augmentation for visual debiasing

Y Zhang, J Sang - Proceedings of the 28th ACM International …, 2020 - dl.acm.org
Machine learning fairness concerns biases against certain protected or sensitive
groups of people when addressing target tasks. This paper studies the debiasing problem …
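
A heavily hedged sketch, inferred mostly from the title: craft adversarial examples against an auxiliary bias (protected-attribute) classifier and add them as augmentations, so the task model also sees images whose spurious bias cue has been perturbed. All names and the one-step attack are assumptions, not the paper's method.

```python
# Sketch: adversarial augmentation against a bias-prediction head.
import torch
import torch.nn.functional as F

def debias_augment(bias_head, x, bias_labels, eps=4/255):
    x_req = x.clone().requires_grad_(True)
    # Increase the bias classifier's loss, weakening the spurious cue.
    F.cross_entropy(bias_head(x_req), bias_labels).backward()
    return (x + eps * x_req.grad.sign()).clamp(0, 1).detach()

# Usage idea: train the task model on torch.cat([x, debias_augment(...)])
# with the original task labels duplicated.
```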

Selectivity and robustness of sparse coding networks

DM Paiton, CG Frye, SY Lundquist, JD Bowen… - Journal of …, 2020 - jov.arvojournals.org
We investigate how the population nonlinearities resulting from lateral inhibition and
thresholding in sparse coding networks influence neural response selectivity and …
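
A hedged sketch of sparse-coding inference in the spirit of the locally competitive algorithm (LCA), where membrane potentials evolve under feedforward drive minus lateral inhibition, followed by a thresholding nonlinearity. Step size, threshold, and iteration count are illustrative, not the paper's settings.

```python
# Sketch: LCA-style sparse coding with lateral inhibition D^T D - I.
import torch

def lca_infer(D, x, lam=0.1, lr=0.05, iters=100):
    """D: (n_pixels, n_neurons) dictionary with unit-norm columns.
    x: (batch, n_pixels). Returns sparse codes a: (batch, n_neurons)."""
    G = D.T @ D - torch.eye(D.shape[1])        # lateral inhibition weights
    b = x @ D                                  # feedforward drive
    u = torch.zeros_like(b)                    # membrane potentials
    for _ in range(iters):
        a = torch.relu(u - lam)                # thresholded activations (codes)
        u = u + lr * (b - u - a @ G)           # LCA dynamics
    return torch.relu(u - lam)
```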

Adversarial perturbation in remote sensing image recognition

S Ai, ASV Koe, T Huang - Applied Soft Computing, 2021 - Elsevier
Recent works have demonstrated that current deep neural networks suffer from small but
intentional perturbations during the testing phase of the model. Such perturbations aiming at …