Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …

Survey on intrusion detection systems based on machine learning techniques for the protection of critical infrastructure

A Pinto, LC Herrera, Y Donoso, JA Gutierrez - Sensors, 2023 - mdpi.com
Industrial control systems (ICSs), supervisory control and data acquisition (SCADA) systems,
and distributed control systems (DCSs) are fundamental components of critical infrastructure …

AugMax: Adversarial composition of random augmentations for robust training

H Wang, C Xiao, J Kossaifi, Z Yu… - Advances in neural …, 2021 - proceedings.neurips.cc
Data augmentation is a simple yet effective way to improve the robustness of deep neural
networks (DNNs). Diversity and hardness are two complementary dimensions of data …

CausalAdv: Adversarial robustness through the lens of causality

Y Zhang, M Gong, T Liu, G Niu, X Tian, B Han… - arXiv preprint arXiv …, 2021 - arxiv.org
The adversarial vulnerability of deep neural networks has attracted significant attention in
machine learning. As causal reasoning has an instinct for modelling distribution change, it is …

Better safe than sorry: Preventing delusive adversaries with adversarial training

L Tao, L Feng, J Yi, SJ Huang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by
slightly perturbing the features of correctly labeled training examples. By formalizing this …

Neural mean discrepancy for efficient out-of-distribution detection

X Dong, J Guo, A Li, WT Ting, C Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Various approaches have been proposed for out-of-distribution (OOD) detection by
augmenting models, input examples, training set, and optimization objectives. Deviating …

Towards lightweight black-box attack against deep neural networks

C Sun, Y Zhang, W Chaoqun, Q Wang… - Advances in …, 2022 - proceedings.neurips.cc
Black-box attacks can generate adversarial examples without accessing the parameters of the
target model, largely exacerbating the threats of deployed deep neural networks (DNNs) …

Detecting adversarial data by probing multiple perturbations using expected perturbation score

S Zhang, F Liu, J Yang, Y Yang, C Li… - … on machine learning, 2023 - proceedings.mlr.press
Adversarial detection aims to determine whether a given sample is an adversarial one
based on the discrepancy between natural and adversarial distributions. Unfortunately …

Probabilistic margins for instance reweighting in adversarial training

F Liu, B Han, T Liu, C Gong, G Niu… - Advances in …, 2021 - proceedings.neurips.cc
Reweighting adversarial data during training has been recently shown to improve
adversarial robustness, where data closer to the current decision boundaries are regarded …

Towards better robustness against common corruptions for unsupervised domain adaptation

Z Gao, K Huang, R Zhang, D Liu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Recent studies have investigated how to achieve robustness for unsupervised domain
adaptation (UDA). While most efforts focus on adversarial robustness, i.e., how the model …