Adversarial attacks on YOLACT instance segmentation

Z Zhang, S Huang, X Liu, B Zhang, D Dong - Computers & Security, 2022 - Elsevier
Adversarial attacks have stimulated research interests in the field of deep learning security.
In terms of autonomous driving technology, instance segmentation can help autonomous …

Misleading attention and classification: an adversarial attack to fool object detection models in the real world

H Zhang, X Ma - Computers & Security, 2022 - Elsevier
Object detection is a hot topic in computer vision (CV), and it has many applications in
various security fields. However, many works have demonstrated that neural network-based …

Dynamic adversarial patch for evading object detection models

S Hoory, T Shapira, A Shabtai, Y Elovici - arXiv preprint arXiv:2010.13070, 2020 - arxiv.org
Recent research shows that neural network models used for computer vision (e.g., YOLO
and Fast R-CNN) are vulnerable to adversarial evasion attacks. Most of the existing real …

An improved ShapeShifter method of generating adversarial examples for physical attacks on stop signs against Faster R-CNNs

S Huang, X Liu, X Yang, Z Zhang - Computers & Security, 2021 - Elsevier
Vehicles have increasingly deployed object detectors to perceive running conditions, and
deep learning networks have been widely adopted by those detectors. Growing neural …

An Adversarial Attack Method against Specified Objects Based on Instance Segmentation

D Lang, D Chen, S Li, Y He - Information, 2022 - mdpi.com
Deep models are widely used and have been demonstrated to harbor hidden security
risks. An adversarial attack can bypass traditional means of defense. By modifying the …

Beyond digital domain: Fooling deep learning based recognition system in physical world

K Yang, T Tsai, H Yu, TY Ho, Y Jin - … of the AAAI Conference on Artificial …, 2020 - aaai.org
Adversarial examples that can fool deep neural network (DNN) models in computer vision
present a growing threat. The current methods of launching adversarial attacks concentrate …

NaturalAE: Natural and robust physical adversarial examples for object detectors

M Xue, C Yuan, C He, J Wang, W Liu - Journal of Information Security and …, 2021 - Elsevier
Recently, many studies show that deep neural networks (DNNs) are susceptible to
adversarial examples, which are generated by adding imperceptible perturbations to the …

Playing against deep-neural-network-based object detectors: A novel bidirectional adversarial attack approach

X Li, Y Jiang, C Liu, S Liu, H Luo… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
In the fields of deep learning and computer vision, the security of object detection models
has received extensive attention. Revealing the security vulnerabilities resulting from …

IPatch: A remote adversarial patch

Y Mirsky - Cybersecurity, 2023 - Springer
Applications such as autonomous vehicles and medical screening use deep learning
models to localize and identify hundreds of objects in a single frame. In the past, it has been …

Adversarial examples based on object detection tasks: A survey

JX Mi, XD Wang, LF Zhou, K Cheng - Neurocomputing, 2023 - Elsevier
Deep learning plays a critical role in the applications of artificial intelligence. The trend of
processing images or videos as input data and pursuing execution efficiency in practical …