Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Naturalistic physical adversarial patch for object detectors

YCT Hu, BH Kung, DS Tan, JC Chen… - Proceedings of the …, 2021 - openaccess.thecvf.com
Most prior works on physical adversarial attacks mainly focus on the attack performance but
seldom enforce any restrictions over the appearance of the generated adversarial patches …

BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning

J Jia, Y Liu, NZ Gong - 2022 IEEE Symposium on Security and …, 2022 - ieeexplore.ieee.org
Self-supervised learning in computer vision aims to pre-train an image encoder using a
large amount of unlabeled images or (image, text) pairs. The pre-trained image encoder can …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Blind backdoors in deep learning models

E Bagdasaryan, V Shmatikov - 30th USENIX Security Symposium …, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …
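The snippet above describes injecting a backdoor by compromising the loss-value computation in training code. Below is a minimal illustrative sketch of that general idea in PyTorch, not the authors' actual implementation: a tampered loss function that blends the ordinary task loss with a backdoor objective on trigger-stamped inputs. The trigger pattern, `add_trigger` helper, blend weight `ALPHA`, and `TARGET_LABEL` are all hypothetical choices for illustration.

```python
# Illustrative sketch of a loss-manipulation backdoor (assumptions noted above),
# not the method from the cited paper.
import torch
import torch.nn.functional as F

ALPHA = 0.5        # assumed blend weight between task loss and backdoor loss
TARGET_LABEL = 0   # assumed attacker-chosen target class

def add_trigger(x: torch.Tensor) -> torch.Tensor:
    """Stamp a small white square into the corner of each image (hypothetical trigger)."""
    x = x.clone()
    x[:, :, -4:, -4:] = 1.0
    return x

def compromised_loss(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Looks like an ordinary loss function to the training pipeline,
    but secretly mixes in a backdoor objective on trigger-stamped inputs."""
    clean_loss = F.cross_entropy(model(x), y)
    x_bd = add_trigger(x)
    y_bd = torch.full_like(y, TARGET_LABEL)
    backdoor_loss = F.cross_entropy(model(x_bd), y_bd)
    return (1 - ALPHA) * clean_loss + ALPHA * backdoor_loss
```

Because the tampering lives entirely inside the loss computation, the rest of the training loop, the data, and the saved checkpoints look unmodified, which is what makes this class of attack hard to spot in code review.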

Shape matters: deformable patch attack

Z Chen, B Li, S Wu, J Xu, S Ding, W Zhang - European conference on …, 2022 - Springer
Though deep neural networks (DNNs) have demonstrated excellent performance in
computer vision, they are susceptible and vulnerable to carefully crafted adversarial …

SoK: Certified robustness for deep neural networks

L Li, T Xie, B Li - 2023 IEEE symposium on security and privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

Towards practical certifiable patch defense with vision transformer

Z Chen, B Li, J Xu, S Wu, S Ding… - Proceedings of the …, 2022 - openaccess.thecvf.com
Patch attacks, one of the most threatening forms of physical attack in adversarial examples,
can induce misclassification in networks by arbitrarily modifying pixels in a continuous …

Securing the future: A comprehensive review of security challenges and solutions in advanced driver assistance systems

AA Mehta, AA Padaria, DJ Bavisi, V Ukani… - IEEE …, 2023 - ieeexplore.ieee.org
Advanced Driver Assistance Systems (ADAS) are advanced technologies that assist drivers
with vehicle operation and navigation. Recent improvements and brisk expansion in the …

Certified training: Small boxes are all you need

MN Müller, F Eckert, M Fischer, M Vechev - arXiv preprint arXiv …, 2022 - arxiv.org
To obtain deterministic guarantees of adversarial robustness, specialized training methods
are used. We propose SABR, a novel such certified training method, based on the key …