Robust physical-world attacks on deep learning visual classification

K Eykholt, I Evtimov, E Fernandes… - Proceedings of the …, 2018 - openaccess.thecvf.com
Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to
adversarial examples, resulting from small-magnitude perturbations added to the input …

Robust physical-world attacks on machine learning models

I Evtimov, K Eykholt, E Fernandes… - arXiv preprint arXiv …, 2017 - s3.observador.pt
Deep neural network-based classifiers are known to be vulnerable to adversarial examples
that can fool them into misclassifying their input through the addition of small-magnitude …

Physgan: Generating physical-world-resilient adversarial examples for autonomous driving

Z Kong, J Guo, A Li, C Liu - … of the IEEE/CVF conference on …, 2020 - openaccess.thecvf.com
Although deep neural networks (DNNs) are being pervasively used in vision-based
autonomous driving systems, they are found vulnerable to adversarial attacks where small …

A survey on physical adversarial attack in computer vision

D Wang, W Yao, T Jiang, G Tang, X Chen - arXiv preprint arXiv …, 2022 - arxiv.org
Over the past decade, deep learning has revolutionized conventional tasks that rely on hand-crafted
feature extraction with its strong feature learning capability, leading to substantial …

Adversarial examples: attacks and defenses in the physical world

H Ren, T Huang, H Yan - International Journal of Machine Learning and …, 2021 - Springer
Deep learning technology has become an important branch of artificial intelligence.
However, researchers found that deep neural networks, as the core algorithm of deep …

Physical adversarial attack meets computer vision: A decade survey

H Wei, H Tang, X Jia, Z Wang, H Yu, Z Li… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite the impressive achievements of Deep Neural Networks (DNNs) in computer vision,
their vulnerability to adversarial attacks remains a critical concern. Extensive research has …

Adversarial camera stickers: A physical camera-based attack on deep learning systems

J Li, F Schmidt, Z Kolter - International conference on …, 2019 - proceedings.mlr.press
Recent work has documented the susceptibility of deep learning systems to adversarial
examples, but most such attacks directly manipulate the digital input to a classifier. Although …

A self-supervised approach for adversarial robustness

M Naseer, S Khan, M Hayat… - Proceedings of the …, 2020 - openaccess.thecvf.com
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based
vision systems, e.g., for classification, segmentation and object detection. The …

Benchmarking adversarial robustness on image classification

Y Dong, QA Fu, X Yang, T Pang… - Proceedings of the …, 2020 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which has become one of the
most important research problems in the development of deep learning. While a lot of efforts …

Adversarial camouflage: Hiding physical-world attacks with natural styles

R Duan, X Ma, Y Wang, J Bailey… - Proceedings of the …, 2020 - openaccess.thecvf.com
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing
works have mostly focused on either digital adversarial examples created via small and …