Adversarial sample detection for deep neural network through model mutation testing

J Wang, G Dong, J Sun, X Wang… - 2019 IEEE/ACM 41st …, 2019 - ieeexplore.ieee.org
Deep neural networks (DNN) have been shown to be useful in a wide range of applications.
However, they are also known to be vulnerable to adversarial samples. By transforming a …

Practical evaluation of adversarial robustness via adaptive auto attack

Y Liu, Y Cheng, L Gao, X Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Defense models against adversarial attacks have grown significantly, but the lack of
practical evaluation methods has hindered progress. Evaluation can be defined as looking …

Towards adversarially robust object detection

H Zhang, J Wang - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Object detection is an important vision task and has emerged as an indispensable
component in many vision systems, rendering its robustness as an increasingly important …

Sparse DNNs with improved adversarial robustness

Y Guo, C Zhang, C Zhang… - Advances in neural …, 2018 - proceedings.neurips.cc
Deep neural networks (DNNs) are computationally/memory-intensive and vulnerable to
adversarial attacks, making them prohibitive in some real-world applications. By converting …

Do wider neural networks really help adversarial robustness?

B Wu, J Chen, D Cai, X He… - Advances in Neural …, 2021 - proceedings.neurips.cc
Adversarial training is a powerful type of defense against adversarial examples. Previous
empirical results suggest that adversarial training requires wider networks for better …

Adversarial examples are a natural consequence of test error in noise

N Ford, J Gilmer, N Carlini, D Cubuk - arXiv preprint arXiv:1901.10513, 2019 - arxiv.org
Over the last few years, the phenomenon of adversarial examples (maliciously constructed
inputs that fool trained machine learning models) has captured the attention of the research …

Guided adversarial attack for evaluating and enhancing adversarial defenses

G Sriramanan, S Addepalli… - Advances in Neural …, 2020 - proceedings.neurips.cc
Advances in the development of adversarial attacks have been fundamental to the progress
of adversarial defense research. Efficient and effective attacks are crucial for reliable …

An adaptive model ensemble adversarial attack for boosting adversarial transferability

B Chen, J Yin, S Chen, B Chen… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
While the transferability property of adversarial examples allows the adversary to perform
black-box attacks (i.e., the attacker has no knowledge about the target model), the transfer …

Image shortcut squeezing: Countering perturbative availability poisons with compression

Z Liu, Z Zhao, M Larson - International conference on …, 2023 - proceedings.mlr.press
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …

The limitations of adversarial training and the blind-spot attack

H Zhang, H Chen, Z Song, D Boning, IS Dhillon… - arXiv preprint arXiv …, 2019 - arxiv.org
The adversarial training procedure proposed by Madry et al. (2018) is one of the most
effective methods to defend against adversarial examples in deep neural networks (DNNs) …