Latent feature relation consistency for adversarial robustness

X Liu, H Kuang, H Liu, X Lin, Y Wu, R Ji - arXiv preprint arXiv:2303.16697, 2023 - arxiv.org
Deep neural networks have been applied in many computer vision tasks and achieved state-
of-the-art performance. However, misclassification will occur when DNNs predict adversarial …

Improving robustness to adversarial examples by encouraging discriminative features

C Agarwal, A Nguyen… - 2019 IEEE International …, 2019 - ieeexplore.ieee.org
Deep neural networks (DNNs) have achieved state-of-the-art results in various pattern
recognition tasks. However, they perform poorly on out-of-distribution adversarial examples …

Improving Adversarial Robustness via Feature Pattern Consistency Constraint

J Hu, J Ye, Z Feng, J Yang, S Liu, X Yu, L Jia… - arXiv preprint arXiv …, 2024 - arxiv.org
Convolutional Neural Networks (CNNs) are well-known for their vulnerability to adversarial
attacks, posing significant security concerns. In response to these threats, various defense …

Improving generalization of adversarial training via robust critical fine-tuning

K Zhu, X Hu, J Wang, X Xie… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks are susceptible to adversarial examples, posing a significant security
risk in critical applications. Adversarial Training (AT) is a well-established technique to …

Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness

W Wei, J Zhou, Y Wu - arXiv preprint arXiv:2303.16861, 2023 - arxiv.org
It is broadly known that deep neural networks are susceptible to being fooled by adversarial
examples with perturbations imperceptible to humans. Various defenses have been …

Class Incremental Learning for Adversarial Robustness

S Cho, H Lee, C Kim - arXiv preprint arXiv:2312.03289, 2023 - arxiv.org
Adversarial training integrates adversarial examples during model training to enhance
robustness. However, its application in fixed dataset settings differs from real-world …

IB-RAR: Information Bottleneck as Regularizer for Adversarial Robustness

X Xu, G Perin, S Picek - 2023 53rd Annual IEEE/IFIP …, 2023 - ieeexplore.ieee.org
This paper proposes a novel method, IB-RAR, which uses Information Bottleneck (IB) to
strengthen adversarial robustness for both adversarial training and non-adversarial-trained …

Towards intrinsic adversarial robustness through probabilistic training

J Dong, L Yang, Y Wang, X Xie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Modern deep neural networks have made numerous breakthroughs in real-world
applications, yet they remain vulnerable to some imperceptible adversarial perturbations …

Enhancing Adversarial Training with Feature Separability

Y Li, X Liu, H Xu, W Wang, J Tang - arXiv preprint arXiv:2205.00637, 2022 - arxiv.org
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. As a countermeasure,
adversarial training aims to achieve robustness based on the min-max optimization problem …

Training Neural Networks with Random Noise Images for Adversarial Robustness

JY Park, L Liu, J Li, J Liu - Proceedings of the 30th ACM International …, 2021 - dl.acm.org
Despite their high accuracy, deep neural networks (DNNs) are vulnerable to adversarial
examples. Currently, adversarial training is the mainstream defense approach against …