Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …
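
As a concrete illustration of the attacks such surveys cover, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The weights, input, and epsilon budget are made-up assumptions, not anything from the survey.

```python
# Minimal FGSM sketch on a toy logistic-regression "model" (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # toy linear classifier
x, y = rng.normal(size=8), 1.0          # a clean input with label 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the binary cross-entropy loss wrt the input x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.1                               # L-infinity attack budget
x_adv = x + eps * np.sign(grad_x)       # step along the gradient sign

print("clean prob:", sigmoid(w @ x + b))
print("adv   prob:", sigmoid(w @ x_adv + b))   # confidence in label 1 drops
```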

Differentiable rendering: A survey

H Kato, D Beker, M Morariu, T Ando… - arXiv preprint arXiv …, 2020 - arxiv.org
Deep neural networks (DNNs) have shown remarkable performance improvements on
vision-related tasks such as object detection or image segmentation. Despite their success …
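
To make the survey's premise concrete, here is a minimal sketch of why "soft" rendering is differentiable: a hard rasterizer gives zero gradient with respect to scene parameters, while a Gaussian splat does not, so its center can be recovered by gradient descent. The splat model, grid size, and learning rate are illustrative assumptions.

```python
# Soft rasterization sketch: fit a Gaussian splat's center by gradient descent.
import numpy as np

H = W = 32
vv, uu = np.mgrid[0:H, 0:W].astype(float)   # pixel coordinates
sigma = 3.0

def render(cx, cy):
    # Gaussian splat: every pixel has a smooth, nonzero gradient wrt (cx, cy)
    return np.exp(-((uu - cx) ** 2 + (vv - cy) ** 2) / (2 * sigma ** 2))

target = render(20.0, 12.0)                 # image we want to reproduce
cx, cy, lr = 8.0, 8.0, 1.0

for step in range(300):
    img = render(cx, cy)
    diff = img - target                     # d(loss)/d(pixel) for 0.5*||img - target||^2
    # Chain rule through the splat: d(pixel)/d(cx) = pixel * (u - cx) / sigma^2
    gcx = np.sum(diff * img * (uu - cx) / sigma ** 2)
    gcy = np.sum(diff * img * (vv - cy) / sigma ** 2)
    cx, cy = cx - lr * gcx, cy - lr * gcy

print("recovered center:", round(cx, 2), round(cy, 2))  # approx (20, 12)
```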

Diffusion-based adversarial sample generation for improved stealthiness and controllability

H Xue, A Araujo, B Hu, Y Chen - Advances in Neural …, 2023 - proceedings.neurips.cc
Neural networks are known to be susceptible to adversarial samples: small variations of
natural examples crafted to deliberately mislead the models. While they can be easily …

Neural Thompson sampling

W Zhang, D Zhou, L Li, Q Gu - arXiv preprint arXiv:2010.00827, 2020 - arxiv.org
Thompson Sampling (TS) is one of the most effective algorithms for solving contextual multi-
armed bandit problems. In this paper, we propose a new algorithm, called Neural Thompson …
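
For context, here is a minimal sketch of classic Thompson sampling on a Bernoulli bandit, the baseline that NeuralTS generalizes by replacing the conjugate Beta posteriors with a neural-network reward model. The arm probabilities below are made up.

```python
# Classic Beta-Bernoulli Thompson sampling (illustrative baseline, not NeuralTS).
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.2, 0.5, 0.7])        # hidden reward probability per arm
wins = np.ones(3)                          # Beta(1, 1) priors
losses = np.ones(3)

for t in range(2000):
    theta = rng.beta(wins, losses)         # sample one plausible value per arm
    arm = int(np.argmax(theta))            # play the arm that currently looks best
    reward = rng.random() < true_p[arm]    # Bernoulli reward
    wins[arm] += reward                    # conjugate posterior update
    losses[arm] += 1 - reward

print("pulls per arm:", (wins + losses - 2).astype(int))  # the best arm dominates
```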

Adversarial T-shirt! Evading person detectors in a physical world

K Xu, G Zhang, S Liu, Q Fan, M Sun, H Chen… - Computer Vision–ECCV …, 2020 - Springer
It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks. The so-
called physical adversarial examples deceive DNN-based decision makers by attaching …

Evaluating the robustness of neural networks: An extreme value theory approach

TW Weng, H Zhang, PY Chen, J Yi, D Su, Y Gao… - arXiv preprint arXiv …, 2018 - arxiv.org
The robustness of neural networks to adversarial examples has received great attention due
to security implications. Despite various attack approaches to crafting visually imperceptible …
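
A simplified sketch of the idea behind this line of work (the CLEVER score): estimate a local Lipschitz constant of the classification margin g(x) = f_c(x) - f_j(x) from gradient norms at sampled points, so that g(x0) / L lower-bounds the distortion needed to flip the prediction from c to j. The two-layer tanh network and sampling radius are illustrative, and the actual method fits a reverse Weibull distribution (extreme value theory) to sampled maxima rather than taking a raw max as done here.

```python
# Simplified CLEVER-style robustness lower bound via sampled gradient norms.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 2))                # tiny 2-layer net: f(x) = W tanh(A x)
W = rng.normal(size=(3, 4))

def logits(x):
    return W @ np.tanh(A @ x)

def grad_margin(x, c, j):
    # d/dx [f_c - f_j] = (W_c - W_j) diag(1 - tanh^2(Ax)) A
    s = 1.0 - np.tanh(A @ x) ** 2
    return ((W[c] - W[j]) * s) @ A

x0 = np.array([0.5, -0.3])
f = logits(x0)
c = int(np.argmax(f))                      # predicted class
j = int(np.argsort(f)[-2])                 # runner-up class

R, n = 0.5, 2000                           # sampling radius, sample count
dirs = rng.normal(size=(n, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
radii = R * np.sqrt(rng.random(n))         # uniform samples in a 2-D disc
L = max(np.linalg.norm(grad_margin(x0 + d * r, c, j)) for d, r in zip(dirs, radii))

print("margin g(x0)    :", f[c] - f[j])
# No perturbation smaller than this (within radius R) can flip c -> j:
print("robustness bound:", (f[c] - f[j]) / L)
```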

Physically realizable adversarial examples for lidar object detection

J Tu, M Ren, S Manivasagam… - Proceedings of the …, 2020 - openaccess.thecvf.com
Modern autonomous driving systems rely heavily on deep learning models to process point
cloud sensory data; meanwhile, deep models have been shown to be susceptible to …

Adversarial camouflage: Hiding physical-world attacks with natural styles

R Duan, X Ma, Y Wang, J Bailey… - Proceedings of the …, 2020 - openaccess.thecvf.com
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing
works have mostly focused on either digital adversarial examples created via small and …

AdvSim: Generating safety-critical scenarios for self-driving vehicles

J Wang, A Pun, J Tu, S Manivasagam… - Proceedings of the …, 2021 - openaccess.thecvf.com
As self-driving systems become better, simulating scenarios where the autonomy stack may
fail becomes more important. Traditionally, those scenarios are generated for a few scenes …

Adversarial laser beam: Effective physical-world attack to DNNs in a blink

R Duan, X Mao, AK Qin, Y Chen… - Proceedings of the …, 2021 - openaccess.thecvf.com
Though it is well known that the performance of deep neural networks (DNNs) degrades
under certain light conditions, there exists no study on the threats of light beams emitted from …
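
A rough sketch in the spirit of this attack: composite a bright beam onto an image and randomly search its geometry until a scorer's confidence drops. The toy scorer, beam model, and search budget are stand-in assumptions, not the paper's greedy search over laser wavelength and physical parameters.

```python
# Black-box random search over a synthetic "laser beam" overlay (illustrative).
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
img = rng.random((H, W))                   # stand-in grayscale "photo"
w_cls = rng.normal(size=H * W)             # toy linear confidence scorer

def score(im):
    return float(w_cls @ im.ravel())       # higher = more confident

def add_beam(im, angle, offset, width, brightness=1.0):
    vv, uu = np.mgrid[0:H, 0:W]
    # distance of each pixel to the beam's center line (unit normal form)
    d = np.abs(np.cos(angle) * (uu - W / 2) + np.sin(angle) * (vv - H / 2) - offset)
    alpha = np.clip(1.0 - d / width, 0.0, 1.0)   # soft beam intensity profile
    return np.clip(im + brightness * alpha, 0.0, 1.0)

best, best_params = score(img), None
for _ in range(500):                       # query-only random search
    params = (rng.uniform(0, np.pi), rng.uniform(-10, 10), rng.uniform(1, 4))
    s = score(add_beam(img, *params))
    if s < best:
        best, best_params = s, params

print("confidence drop:", score(img) - best, "with beam", best_params)
```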