CAMOU: Learning physical vehicle camouflages to adversarially attack detectors in the wild

Y Zhang, H Foroosh, P David, B Gong - International Conference on …, 2018 - openreview.net
In this paper, we conduct an experimental study of physical adversarial attacks
on object detectors in the wild. In particular, we learn a camouflage pattern to hide …

Universal physical camouflage attacks on object detectors

L Huang, C Gao, Y Zhou, C Xie… - Proceedings of the …, 2020 - openaccess.thecvf.com
In this paper, we study physical adversarial attacks on object detectors in the wild. Previous
works mostly craft instance-dependent perturbations only for rigid or planar objects. To this …

FCA: Learning a 3D full-coverage vehicle camouflage for multi-view physical adversarial attack

D Wang, T Jiang, J Sun, W Zhou, Z Gong… - Proceedings of the …, 2022 - ojs.aaai.org
Physical adversarial attacks in object detection have attracted increasing attention.
However, most previous works focus on hiding objects from the detector by generating …

Learning coated adversarial camouflages for object detectors

Y Duan, J Chen, X Zhou, J Zou, Z He, J Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
An adversary can fool deep neural network object detectors by generating adversarial
noise. Most of the existing works focus on learning local visible noise in an adversarial …
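
To make the "local visible noise" idea concrete, here is a minimal patch-optimization sketch in PyTorch. It is illustrative only, not the method of the paper above: an untrained torchvision classifier stands in for a detector, and the image, label, patch size, and step count are all placeholder assumptions.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()        # stand-in victim model (untrained)
    loss_fn = torch.nn.CrossEntropyLoss()

    patch = torch.rand(3, 40, 40, requires_grad=True)   # learnable visible patch
    opt = torch.optim.Adam([patch], lr=1e-2)

    def paste(img, patch, y0=90, x0=90):
        # Overwrite a fixed region with the clamped patch; gradients flow
        # back into `patch` through this indexed assignment.
        out = img.clone()
        out[:, :, y0:y0 + 40, x0:x0 + 40] = patch.clamp(0, 1)
        return out

    img = torch.rand(1, 3, 224, 224)    # placeholder scene
    label = torch.tensor([0])           # placeholder true label

    for _ in range(10):                 # a few gradient steps on the patch pixels
        opt.zero_grad()
        loss = -loss_fn(model(paste(img, patch)), label)  # maximize true-class loss
        loss.backward()
        opt.step()

Only the pasted pixels are optimized; the rest of the scene is left untouched, which is what distinguishes this patch-style setup from full-image perturbations.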

The translucent patch: A physical and universal attack on object detectors

A Zolfi, M Kravchik, Y Elovici… - Proceedings of the …, 2021 - openaccess.thecvf.com
Physical adversarial attacks against object detectors have seen increasing success in recent
years. However, these attacks require direct access to the object of interest in order to apply …

Physical adversarial attack on vehicle detector in the CARLA simulator

T Wu, X Ning, W Li, R Huang, H Yang… - arXiv preprint arXiv …, 2020 - arxiv.org
In this paper, we tackle the issue of physical adversarial examples for object detectors in the
wild. Specifically, we propose to generate adversarial patterns to be applied on vehicle …

Seeing isn't believing: Towards more robust adversarial attack against real world object detectors

Y Zhao, H Zhu, R Liang, Q Shen, S Zhang… - Proceedings of the 2019 …, 2019 - dl.acm.org
Recently, adversarial examples (AEs) that deceive deep learning models have been a topic
of intense research interest. Compared with AEs in the digital space, the physical …

Physical adversarial examples for object detectors

D Song, K Eykholt, I Evtimov, E Fernandes… - 12th USENIX workshop …, 2018 - usenix.org
Deep neural networks (DNNs) are vulnerable to adversarial examples—maliciously crafted
inputs that cause DNNs to make incorrect predictions. Recent work has shown that these …
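
The one-step fast gradient sign method (FGSM) is the textbook way to craft such inputs in the digital domain; a minimal PyTorch sketch follows. This is background illustration only, not the physical attack of the paper above, and the stand-in classifier, epsilon, and input tensors are placeholder assumptions.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()   # stand-in model (untrained)
    loss_fn = torch.nn.CrossEntropyLoss()

    def fgsm(x, y, eps=8 / 255):
        # x_adv = x + eps * sign(dL/dx): one step in the gradient-sign direction.
        x = x.clone().detach().requires_grad_(True)
        loss_fn(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()  # keep pixels valid

    x_adv = fgsm(torch.rand(1, 3, 224, 224), torch.tensor([0]))

Physical attacks like the ones surveyed here must go further than this digital step, surviving printing, viewpoint, and lighting changes rather than perturbing raw pixels directly.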

Dynamic adversarial patch for evading object detection models

S Hoory, T Shapira, A Shabtai, Y Elovici - arXiv preprint arXiv:2010.13070, 2020 - arxiv.org
Recent research shows that neural network models used for computer vision (e.g., YOLO
and Fast R-CNN) are vulnerable to adversarial evasion attacks. Most of the existing real …

Standard detectors aren't (currently) fooled by physical adversarial stop signs

J Lu, H Sibai, E Fabry, D Forsyth - arXiv preprint arXiv:1710.03337, 2017 - arxiv.org
An adversarial example is an input that has been adjusted to produce the wrong label
when presented to a system at test time. If adversarial examples existed that could fool a …