Transcending Adversarial Perturbations: Manifold-Aided Adversarial Examples with Legitimate Semantics

S Li, X Jiang, X Ma - arXiv preprint arXiv:2402.03095, 2024 - arxiv.org
Deep neural networks are significantly vulnerable to adversarial examples crafted with tiny malicious perturbations. Although most conventional adversarial attacks ensure the …
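The snippet only states that small perturbations can fool deep networks; as a rough illustration of that claim (not of the manifold-aided attack this paper proposes), the PyTorch sketch below crafts an adversarial example with the standard fast gradient sign method (FGSM). `model`, `x`, `y`, and `epsilon` are hypothetical placeholders, not names from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Craft an adversarial example with the fast gradient sign method.

    Illustrative only: `model` (a classifier), `x` (input batch in [0, 1]),
    `y` (true labels), and `epsilon` (perturbation budget) are assumptions;
    the paper's manifold-aided attack is a different construction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input in the valid image range.
    return x_adv.clamp(0.0, 1.0).detach()
```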

Boosting Adversarial Training with Learnable Distribution

K Chen, J Wang, JM Adeke, G Liu… - Computers, Materials & …, 2024 - cdn.techscience.cn
In recent years, various adversarial defense methods have been proposed to improve the
robustness of deep neural networks. Adversarial training is one of the most potent methods …
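The snippet names adversarial training as a defense without showing the procedure; for context, here is a minimal sketch of one epoch of plain FGSM-based adversarial training, not the learnable-distribution variant this paper proposes. `model`, `loader`, `optimizer`, and `epsilon` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=8 / 255):
    """One epoch of generic adversarial training: fit the model on attacked inputs.

    Illustrative only: this is vanilla FGSM-based adversarial training,
    not the learnable-distribution method the paper introduces.
    """
    model.train()
    for x, y in loader:
        # Inner step: craft an adversarial counterpart of the clean batch.
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

        # Outer step: minimize the loss on the adversarial batch.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```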