Newly emerged machine learning (e.g., deep learning) methods have become a strong driving force in revolutionizing a wide range of industries, such as smart healthcare, financial …
X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with imperceptible perturbations. Though adversarial attacks have achieved incredible success …
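The "imperceptible perturbations" these snippets refer to are often illustrated with the one-step Fast Gradient Sign Method (FGSM). A minimal numpy sketch on a hypothetical two-weight logistic classifier (the weights, input, and budget below are made up purely for illustration, not taken from any of the cited papers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: move x by eps along the sign of the input-gradient
    of the logistic loss, which maximally increases the loss per l_inf budget."""
    grad_x = (sigmoid(w @ x + b) - y) * w   # dL/dx for the logistic loss
    return x + eps * np.sign(grad_x)

# Hypothetical linear classifier and input
w, b = np.array([1.0, -2.0]), 0.0
x = np.array([0.5, 0.1])                    # clean margin w @ x + b = 0.3 > 0 (class 1)
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.35)
# After the attack the margin turns negative, so the predicted class flips
```

A small eps suffices here because the attack spends its entire ℓ∞ budget in the single most loss-increasing direction per coordinate.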
X Jia, Y Zhang, B Wu, K Ma… - Proceedings of the …, 2022 - openaccess.thecvf.com
Adversarial training (AT) is typically formulated as a minimax problem, whose performance depends on the inner optimization that involves the generation of adversarial …
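The minimax formulation mentioned above — minimize, over the model parameters, the worst-case loss within a perturbation ball — can be sketched with a PGD inner maximizer and a plain gradient-descent outer step. A toy numpy version on a hypothetical logistic model (the data, step sizes, and radius are illustrative assumptions, not the cited paper's setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_inner_max(w, x, y, eps=0.3, alpha=0.1, steps=10):
    """Inner maximization: PGD ascent on the logistic loss inside an
    l_inf ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        grad_x = (sigmoid(w @ x_adv) - y) * w        # dL/dx for the logistic loss
        x_adv = x_adv + alpha * np.sign(grad_x)      # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project back into the ball
    return x_adv

def adversarial_training(w, data, lr=0.5, epochs=50):
    """Outer minimization: descend the loss evaluated at the worst-case
    points produced by the inner PGD loop."""
    for _ in range(epochs):
        for x, y in data:
            x_adv = pgd_inner_max(w, x, y)
            grad_w = (sigmoid(w @ x_adv) - y) * x_adv  # dL/dw at the adversarial point
            w = w - lr * grad_w
    return w

# Two linearly separable toy points (hypothetical data)
data = [(np.array([1.0, 0.5]), 1.0), (np.array([-1.0, -0.5]), 0.0)]
w = adversarial_training(np.zeros(2), data)
```

The snippet's point shows up directly here: the quality of `pgd_inner_max` (steps, step size, radius) determines which worst-case points the outer step ever sees.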
We propose the Square Attack, a score-based black-box ℓ2- and ℓ∞-adversarial attack that does not rely on local gradient information and thus is not affected by …
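A heavily simplified sketch of the random-search idea behind such score-based attacks: propose a square-shaped ℓ∞ update and keep it only if the queried loss increases. The real Square Attack's sampling schedule and initialization differ; the loss function and "image" below are toy stand-ins, not the paper's algorithm:

```python
import numpy as np

def square_attack_sketch(loss_fn, x, eps=0.2, n_iters=200, p=0.3, seed=0):
    """Score-based random search: propose a square-shaped l_inf update and
    keep it only if the (black-box) loss increases. No gradients are queried."""
    rng = np.random.default_rng(seed)
    h, w = x.shape
    x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=x.shape), 0.0, 1.0)
    best = loss_fn(x_adv)
    for _ in range(n_iters):
        s = max(1, int(round(np.sqrt(p * h * w))))   # side length of the square
        r = rng.integers(0, h - s + 1)
        c = rng.integers(0, w - s + 1)
        cand = x_adv.copy()
        # Reset the window to a vertex of the l_inf ball around the clean image
        cand[r:r+s, c:c+s] = np.clip(
            x[r:r+s, c:c+s] + eps * rng.choice([-1.0, 1.0]), 0.0, 1.0)
        score = loss_fn(cand)
        if score > best:                             # accept only improving moves
            x_adv, best = cand, score
    return x_adv

# Toy stand-ins: a constant-gray "image" and a loss that rewards deviation
x = np.full((8, 8), 0.5)
loss_fn = lambda z: np.abs(z - 0.5).sum()
x_adv = square_attack_sketch(loss_fn, x)
```

Because only loss values are queried, the snippet's claim follows: gradient masking in the defended model has no effect on this search.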
X Wang, X He, J Wang, K He - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Deep neural networks are known to be extremely vulnerable to adversarial examples under the white-box setting. Moreover, adversarial examples crafted on the surrogate (source) …
X Wang, Z Zhang, J Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Given the severe vulnerability of Deep Neural Networks (DNNs) against adversarial examples, there is an urgent need for an effective adversarial attack to identify the …
Y Dong, QA Fu, X Yang, T Pang… - proceedings of the …, 2020 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning. While many efforts …
Deep learning technology is increasingly being applied in safety-critical scenarios but has recently been found to be susceptible to imperceptible adversarial perturbations. This raises …
Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments. A malicious backdoor could be embedded in a …
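The backdoor embedding mentioned in this last snippet is commonly demonstrated via BadNets-style data poisoning: stamp a small trigger patch onto a fraction of the training images and relabel them to an attacker-chosen target class. A minimal numpy sketch (the patch size, fraction, and trigger value are illustrative assumptions):

```python
import numpy as np

def poison(images, labels, target=0, trigger_value=1.0, frac=0.1, seed=0):
    """BadNets-style poisoning sketch: stamp a 3x3 trigger patch in the
    bottom-right corner of a random fraction of images and relabel them
    to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(frac * len(images)), replace=False)
    images[idx, -3:, -3:] = trigger_value   # stamp the trigger patch
    labels[idx] = target                    # flip the label to the target class
    return images, labels

# Hypothetical dataset: 100 blank 8x8 images, all labeled class 1
clean_images = np.zeros((100, 8, 8))
clean_labels = np.ones(100, dtype=int)
poisoned_images, poisoned_labels = poison(clean_images, clean_labels)
```

A model trained on the poisoned set behaves normally on clean inputs but maps any trigger-stamped input to the target class, which is why such backdoors are hard to detect from clean-data accuracy alone.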