A comprehensive survey on test-time adaptation under distribution shifts

J Liang, R He, T Tan - International Journal of Computer Vision, 2024 - Springer
Machine learning methods strive to acquire a robust model during training
that can effectively generalize to test samples, even in the presence of distribution …
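
One family of methods such a survey covers is fully test-time adaptation by entropy minimization (e.g., Tent). Below is a minimal PyTorch sketch of that idea only, not the survey's own method; the choice to adapt just the BatchNorm affine parameters, and the model/optimizer interfaces, are illustrative assumptions.

```python
import torch
import torch.nn as nn

def collect_bn_params(model: nn.Module):
    """Collect only BatchNorm affine parameters, the usual
    adaptation target in entropy-minimization TTA."""
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)
            params += [m.weight, m.bias]
    return params

def tta_step(model, x, optimizer):
    """One Tent-style step: minimize the entropy of the model's
    predictions on an unlabeled test batch x."""
    probs = model(x).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return probs.detach()
```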

Adversarial deep learning: A survey on adversarial attacks and defense mechanisms on image classification

SY Khamaiseh, D Bagagem, A Al-Alaj… - IEEE …, 2022 - ieeexplore.ieee.org
The popularity of adopting deep neural networks (DNNs) for solving hard problems has
increased substantially. Specifically, in the field of computer vision, DNNs are becoming a …
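
The canonical attack covered by essentially every survey in this area is the Fast Gradient Sign Method (FGSM). A minimal PyTorch sketch, assuming a standard classifier and inputs normalized to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Single-step Fast Gradient Sign Method on inputs in [0, 1]:
    perturb each pixel by eps in the direction of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```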

SegPGD: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness

J Gu, H Zhao, V Tresp, PHS Torr - European Conference on Computer …, 2022 - Springer
Deep neural network-based image classifiers are vulnerable to adversarial
perturbations. These classifiers can be easily fooled by adding artificial, small and …
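
SegPGD extends PGD to segmentation by reweighting the per-pixel loss between correctly and wrongly classified pixels across iterations. The sketch below shows that skeleton in PyTorch; the linear lambda schedule and hyperparameters are assumptions based on the abstract's description, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def seg_pgd(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """PGD for segmentation with SegPGD-style per-pixel reweighting:
    early iterations emphasize still-correctly-classified pixels.
    The linear lambda schedule here is an assumption."""
    x_adv = x.clone().detach()
    for t in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                              # (N, C, H, W)
        ce = F.cross_entropy(logits, y, reduction="none")  # (N, H, W)
        correct = (logits.argmax(dim=1) == y).float()
        lam = t / (2.0 * steps)                            # grows toward 0.5
        loss = ((1 - lam) * correct * ce + lam * (1 - correct) * ce).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```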

Understanding robust overfitting of adversarial training and beyond

C Yu, B Han, L Shen, J Yu, C Gong… - International …, 2022 - proceedings.mlr.press
Robust overfitting widely exists in adversarial training of deep networks. The exact
underlying reasons for this are still not completely understood. Here, we explore the causes …
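
Robust overfitting is usually diagnosed by the gap between the best and the final robust test accuracy over training, and checkpointing on robust accuracy (early stopping) is the standard baseline remedy. A sketch under that framing; `adv_train_epoch` and `robust_eval` are hypothetical hooks, not the paper's code:

```python
def train_with_early_stopping(model, train_loader, test_loader,
                              optimizer, epochs, adv_train_epoch, robust_eval):
    """Checkpoint on robust test accuracy: the gap between the returned
    best accuracy and the final epoch's accuracy is the usual symptom
    of robust overfitting."""
    best_acc, best_state = 0.0, None
    for _ in range(epochs):
        adv_train_epoch(model, train_loader, optimizer)
        acc = robust_eval(model, test_loader)   # e.g., PGD-20 test accuracy
        if acc > best_acc:
            best_acc = acc
            best_state = {k: v.detach().clone()
                          for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return best_acc
```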

Generating transferable 3D adversarial point cloud via random perturbation factorization

B He, J Liu, Y Li, S Liang, J Li, X Jia… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Recent studies have demonstrated that existing deep neural networks (DNNs) for 3D point
clouds are vulnerable to adversarial examples, especially under white-box settings …
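
For context, the generic white-box baseline in this setting simply runs PGD on the point coordinates. The sketch below illustrates that baseline only; it is not the paper's random-perturbation-factorization method, and the model interface is an assumption.

```python
import torch
import torch.nn.functional as F

def attack_point_cloud(model, points, label, eps=0.05, alpha=0.01, steps=50):
    """Generic L_inf PGD on the xyz coordinates of a point cloud.
    `points`: (num_points, 3); `model` takes a (1, num_points, 3) batch."""
    adv = points.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv.unsqueeze(0)), label.view(1))
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = points + (adv - points).clamp(-eps, eps)  # project to eps-ball
    return adv.detach()
```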

Robust unlearnable examples: Protecting data against adversarial learning

S Fu, F He, Y Liu, L Shen, D Tao - arXiv preprint arXiv:2203.14533, 2022 - arxiv.org
The tremendous amount of accessible data in cyberspace faces the risk of being
used without authorization to train deep learning models. To address this concern, methods are …
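
Unlearnable examples rest on a min-min objective: a small error-minimizing perturbation makes samples look "already learned", so models trained on them extract little signal. A simplified sketch of the inner noise-generation loop; the cited paper additionally hardens this noise against adversarial training, which is omitted here.

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner loop of the min-min objective: find a bounded perturbation
    that MINIMIZES the training loss, so protected samples carry almost
    no learnable signal."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta.detach() - alpha * grad.sign()).clamp(-eps, eps)
    return delta.detach()
```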

Triangle attack: A query-efficient decision-based adversarial attack

X Wang, Z Zhang, K Tong, D Gong, K He, Z Li… - European conference on …, 2022 - Springer
Decision-based attacks pose a severe threat to real-world applications since they treat the
target model as a black box and access only its hard prediction labels. Great efforts have …
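
In the hard-label setting the attacker can only submit inputs and observe predicted classes. The skeleton below makes that setting concrete with a generic boundary-walk attack; it is not Triangle Attack's geometric construction, and `query` is a hypothetical callable returning the model's predicted label.

```python
import torch

def hard_label_attack(query, x, y, x_init, steps=1000, step_size=0.1):
    """Random-walk skeleton of a decision-based attack: keep any move
    that stays misclassified while drifting toward the clean input x.
    `query(t)` returns the model's predicted class for tensor t."""
    adv = x_init.clone()
    for _ in range(steps):
        candidate = adv + step_size * (x - adv)          # contract toward x
        candidate = (candidate + 0.5 * step_size * torch.randn_like(x)).clamp(0, 1)
        if query(candidate) != y:                        # still adversarial
            adv = candidate
    return adv
```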

Learning defense transformations for counterattacking adversarial examples

J Li, S Zhang, J Cao, M Tan - Neural Networks, 2023 - Elsevier
Deep neural networks (DNNs) are vulnerable to adversarial examples with small
perturbations. Adversarial defense has thus become an important means of improving the …
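
A common instance of defending by transforming inputs is random resizing plus padding (in the spirit of Xie et al., 2018). The sketch below shows that generic idea only; the cited paper instead learns its defense transformations, which this does not reproduce.

```python
import random
import torch.nn.functional as F

def random_resize_pad(x, out_size=224, min_size=200):
    """Randomized input transformation: resize a (N, C, H, W) batch to a
    random smaller size, then pad back to out_size at a random offset."""
    size = random.randint(min_size, out_size - 1)
    resized = F.interpolate(x, size=(size, size), mode="bilinear",
                            align_corners=False)
    pad = out_size - size
    left, top = random.randint(0, pad), random.randint(0, pad)
    return F.pad(resized, (left, pad - left, top, pad - top))
```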

Imitated detectors: Stealing knowledge of black-box object detectors

S Liang, A Liu, J Liang, L Li, Y Bai, X Cao - Proceedings of the 30th ACM …, 2022 - dl.acm.org
Deep neural networks have shown great potential in many practical applications, yet their
knowledge is at risk of being stolen via exposed services (e.g., APIs). In contrast to the …
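
Knowledge stealing of this kind typically reduces to distillation through the exposed API: query the victim on unlabeled data and fit a surrogate to its outputs. A minimal classification-style sketch; `query_api` is a hypothetical callable returning class probabilities, and stealing a detector, as in the paper, would additionally require imitating box outputs.

```python
import torch
import torch.nn.functional as F

def distill_from_api(surrogate, query_api, unlabeled_loader, optimizer, epochs=5):
    """Fit a surrogate to a black-box victim's soft predictions.
    `query_api(x)` is assumed to return class probabilities."""
    for _ in range(epochs):
        for x in unlabeled_loader:
            with torch.no_grad():
                teacher = query_api(x)               # victim's soft labels
            loss = F.kl_div(surrogate(x).log_softmax(dim=1),
                            teacher, reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```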

Robust weight perturbation for adversarial training

C Yu, B Han, M Gong, L Shen, S Ge, B Du… - arXiv preprint arXiv …, 2022 - arxiv.org
Overfitting widely exists in adversarially robust training of deep networks. An effective remedy
is adversarial weight perturbation, which injects the worst-case weight perturbation during …
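
For reference, vanilla adversarial weight perturbation (AWP, Wu et al., 2020) takes one layer-wise-scaled ascent step in weight space, computes gradients there, and applies the update at the restored weights. A PyTorch sketch of that baseline; the cited paper's contribution is constraining which weight perturbations get injected, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def awp_step(model, x_adv, y, optimizer, gamma=5e-3):
    """Vanilla AWP: ascend the loss in weight space (layer-wise scaled),
    take the gradient there, restore the weights, then descend."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, params)
    diffs = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            d = gamma * p.norm() * g / (g.norm() + 1e-12)
            p.add_(d)                       # move to the worst-case weights
            diffs.append(d)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()   # grads at perturbed weights
    with torch.no_grad():
        for p, d in zip(params, diffs):
            p.sub_(d)                       # restore original weights
    optimizer.step()                        # update with the robust gradient
```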