Rethinking natural adversarial examples for classification models

X Li, J Li, T Dai, J Shi, J Zhu, X Hu - arXiv preprint arXiv:2102.11731, 2021 - arxiv.org
Recently, it was found that many real-world examples without intentional modifications can
fool machine learning models, and such examples are called "natural adversarial …

Natural adversarial objects

F Lau, N Subramani, S Harrison, A Kim… - arXiv preprint arXiv …, 2021 - arxiv.org
Although state-of-the-art object detection methods have shown compelling performance,
models are often not robust to adversarial attacks and out-of-distribution data. We introduce …

Natural adversarial examples

D Hendrycks, K Zhao, S Basart… - Proceedings of the …, 2021 - openaccess.thecvf.com
We introduce two challenging datasets that reliably cause machine learning model
performance to substantially degrade. The datasets are collected with a simple adversarial …

Pick-object-attack: Type-specific adversarial attack for object detection

OM Nezami, A Chaturvedi, M Dras, U Garain - Computer Vision and Image …, 2021 - Elsevier
Many recent studies have shown that deep neural models are vulnerable to adversarial
samples: images with imperceptible perturbations, for example, can fool image classifiers. In …

Gat: Generative adversarial training for adversarial example detection and robust classification

X Yin, S Kolouri, GK Rohde - arXiv preprint arXiv:1905.11475, 2019 - arxiv.org
The vulnerabilities of deep neural networks against adversarial examples have become a
significant concern for deploying these models in sensitive domains. Devising a definitive …

Traits & transferability of adversarial examples against instance segmentation & object detection

R Gurbaxani, S Mishra - arXiv preprint arXiv:1808.01452, 2018 - arxiv.org
Despite the recent advancements in deploying neural networks for image classification, it
has been found that adversarial examples are able to fool these models, leading them to …

Exploring robustness connection between artificial and natural adversarial examples

A Agarwal, N Ratha, M Vatsa… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Although recent deep neural network algorithms have shown tremendous success in several
computer vision tasks, their vulnerability to minute adversarial perturbations has raised …

Adversarial examples with transferred camouflage style for object detection

X Deng, Z Fang, Y Zheng, Y Wang… - Journal of Physics …, 2021 - iopscience.iop.org
Most existing adversarial example attack methods for object detection models aim at
generating subtle perturbations that are invisible to human vision. However, some …

Adversarial examples based on object detection tasks: A survey

JX Mi, XD Wang, LF Zhou, K Cheng - Neurocomputing, 2023 - Elsevier
Deep learning plays a critical role in the applications of artificial intelligence. The trend of
processing images or videos as input data and pursuing execution efficiency in practical …

Controlling over-generalization and its effect on adversarial examples generation and detection

M Abbasi, A Rajabi, AS Mozafari, RB Bobba… - arXiv preprint arXiv …, 2018 - arxiv.org
Convolutional Neural Networks (CNNs) significantly improve the state-of-the-art for many
applications, especially in computer vision. However, CNNs still suffer from a tendency to …