H Wang, K Dong, Z Zhu, H Qin, A Liu, X Fang… - 2024 IEEE Symposium …, 2024 - computer.org
Abstract: Vision-Language Pre-training (VLP) models have achieved remarkable success in practice, yet are easily misled by adversarial attacks. Though harmful, adversarial …
X Dong, R Wang, S Liang, A Liu, L Jing - Proceedings of the 31st ACM …, 2023 - dl.acm.org
Billions of people share their daily life images on social media every day. However, malicious collectors use deep face recognition systems to easily steal their biometric …
White-box adversarial perturbations are generated via iterative optimization algorithms, most often by minimizing an adversarial loss on an ℓp neighborhood of the original image, the so …
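That snippet describes the standard projected-gradient recipe for white-box attacks. A minimal PyTorch sketch of the idea, assuming an untargeted ℓ∞ PGD loop (an illustration only, not the specific method of any paper listed here):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Illustrative untargeted l_inf PGD: perturb x to increase the loss on label y,
    projecting back onto the eps-ball around x after every step (assumed inputs in [0, 1])."""
    x_adv = x.clone().detach()
    # random start inside the eps-ball
    x_adv = torch.clamp(x_adv + torch.empty_like(x_adv).uniform_(-eps, eps), 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)          # adversarial loss
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # gradient-ascent step
            x_adv = x + torch.clamp(x_adv - x, -eps, eps) # project onto the l_inf neighborhood
            x_adv = torch.clamp(x_adv, 0.0, 1.0)          # keep a valid image
    return x_adv.detach()
```

Here `eps`, `alpha`, and `steps` are placeholder hyperparameters; real attacks in the cited works may use different losses, norms, or step rules.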
J Pang, C Yuan, Z Xia, X Li, Z Fu - Knowledge-Based Systems, 2024 - Elsevier
In recent years, deep learning has gained widespread application across diverse fields, including image classification and machine translation. Nevertheless, the emergence of …
J Li, Z Wang, J Li - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Adversarial patch attacks, which can mislead deep learning models and the human eye in both the digital and physical domains, have led to a trust crisis. Traditional approaches to …
W Zhu, Y Sun, J Liu, Y Cheng, X Ji, W Xu - arXiv preprint arXiv:2401.00151, 2023 - arxiv.org
The proliferation of images captured from millions of cameras and the advancement of facial recognition (FR) technology have made the abuse of FR a severe privacy threat. Existing …
W Xie, Z Niu, Q Lin, S Song… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Existing studies have shown that malicious and imperceptible adversarial samples may significantly weaken the reliability and validity of deep learning systems. Since gradient …
Deep neural networks in the area of information security face a severe threat from adversarial examples (AEs). Existing methods of AE generation use two optimization …
X Gao, J Liu - International Conference on Artificial Neural Networks, 2023 - Springer
Abstract: Deep Neural Networks (DNNs) are susceptible to attack by adversarial examples, which can cause serious consequences in safety-critical systems. Towards recent studies …