Deep neural networks (DNNs), while powerful, often lack interpretability and are vulnerable to adversarial attacks. Concept bottleneck models (CBMs), which incorporate …
S Liu, Y Han - The Visual Computer, 2024 - Springer
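A concept bottleneck model of the kind this entry refers to routes the input through an explicit concept-prediction layer before the label head. The following is a minimal sketch assuming a generic PyTorch setup; the class, layer sizes, and loss weighting are illustrative and not taken from the cited paper.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Minimal CBM: input -> concept predictions -> label prediction."""
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Backbone maps raw features to concept logits (the "bottleneck").
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_concepts)
        )
        # Label head sees only the predicted concepts, not the raw input.
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_net(x)
        concepts = torch.sigmoid(concept_logits)   # interpretable intermediate layer
        label_logits = self.label_net(concepts)
        return concept_logits, label_logits

def cbm_loss(concept_logits, label_logits, concept_targets, labels, lam=0.5):
    """Joint objective: supervise both the concepts and the final label."""
    concept_loss = nn.functional.binary_cross_entropy_with_logits(
        concept_logits, concept_targets
    )
    label_loss = nn.functional.cross_entropy(label_logits, labels)
    return label_loss + lam * concept_loss
```

Because the label head only sees the concept layer, predictions can be inspected or corrected at the concept level, which is the interpretability property the snippet alludes to.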
Recent research has demonstrated the vulnerability of deep networks to adversarial perturbations. Adversarial training and its variants have been shown to be effective defense algorithms …
Y Wang, L Chen, Z Yang, T Cao - International Journal of Computational …, 2024 - Springer
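The adversarial training this entry mentions is commonly instantiated as a PGD-style min-max procedure: an inner loop finds a worst-case perturbation, and the outer loop trains on the perturbed inputs. The sketch below illustrates that standard formulation in PyTorch; the hyperparameters are typical defaults and it is not the specific variant proposed in the cited paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find an L-inf perturbation of x that raises the loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign ascent, then projection back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: take a training step on the adversarial examples."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```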
Adversarial patches, a type of adversarial example, pose serious security threats to deep neural networks (DNNs) by inducing erroneous outputs. Existing gradient stabilization …
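An adversarial patch of the kind described in this entry is typically a small trainable region pasted onto inputs and optimized so the model outputs an attacker-chosen class. The sketch below is a minimal fixed-location version in PyTorch, given purely to illustrate the attack; it is unrelated to the gradient-stabilization methods the snippet refers to, and the function and parameter names are assumptions.

```python
import torch
import torch.nn.functional as F

def train_adversarial_patch(model, loader, target_class, patch_size=32,
                            steps=100, lr=0.05, device="cpu"):
    """Optimize a square patch that pushes predictions toward target_class."""
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for _, (x, _) in zip(range(steps), loader):
        x = x.to(device)
        x_patched = x.clone()
        # Paste the patch into the top-left corner (fixed location for simplicity).
        x_patched[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
        logits = model(x_patched)
        target = torch.full((x.size(0),), target_class, device=device)
        loss = F.cross_entropy(logits, target)  # drive predictions to the target class
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```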
In this paper, we propose an adversarial training method that leverages the underlying structure of adversarial perturbation distributions. Unlike …
W Xie, J Yin, Z Chen - arXiv preprint arXiv:2411.07510, 2024 - arxiv.org
To address insufficient robustness, unstable features, and interference from data noise in existing network attack detection and identification models, this paper …