Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

A comprehensive review on deep learning algorithms: Security and privacy issues

M Tayyab, M Marjani, NZ Jhanjhi, IAT Hashem… - Computers & …, 2023 - Elsevier
Machine Learning (ML) algorithms are used to train the machines to perform
various complicated tasks that begin to modify and improve with experiences. It has become …

Naturalistic physical adversarial patch for object detectors

YCT Hu, BH Kung, DS Tan, JC Chen… - Proceedings of the …, 2021 - openaccess.thecvf.com
Most prior works on physical adversarial attacks mainly focus on the attack performance but
seldom enforce any restrictions over the appearance of the generated adversarial patches …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Deep learning-based autonomous driving systems: A survey of attacks and defenses

Y Deng, T Zhang, G Lou, X Zheng… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
The rapid development of artificial intelligence, especially deep learning technology, has
advanced autonomous driving systems (ADSs) by providing precise control decisions to …

Query-efficient decision-based black-box patch attack

Z Chen, B Li, S Wu, S Ding… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been shown to be highly vulnerable to imperceptible
adversarial perturbations. As a complementary type of adversary, patch attacks that …

Sibling-Attack: Rethinking transferable adversarial attacks against face recognition

Z Li, B Yin, T Yao, J Guo, S Ding… - Proceedings of the …, 2023 - openaccess.thecvf.com
A hard challenge in developing practical face recognition (FR) attacks stems from the black-
box nature of the target FR model, i.e., inaccessible gradient and parameter information to …

SCALE-UP: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency

J Guo, Y Li, X Chen, H Guo, L Sun, C Liu - arXiv preprint arXiv:2302.03251, 2023 - arxiv.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries
embed a hidden backdoor trigger during the training process for malicious prediction …

CAT: Closed-loop adversarial training for safe end-to-end driving

L Zhang, Z Peng, Q Li, B Zhou - Conference on Robot …, 2023 - proceedings.mlr.press
Driving safety is a top priority for autonomous vehicles. Orthogonal to prior work handling
accident-prone traffic events by algorithm designs at the policy level, we investigate a …

Vehicle trajectory prediction works, but not everywhere

M Bahari, S Saadatnejad, A Rahimi… - Proceedings of the …, 2022 - openaccess.thecvf.com
Vehicle trajectory prediction is nowadays a fundamental pillar of self-driving cars. Both the
industry and research communities have acknowledged the need for such a pillar by …