T Long, Q Gao, L Xu, Z Zhou - Computers & Security, 2022 - Elsevier
Deep learning has been widely applied in various fields such as computer vision, natural language processing, and data mining. Although deep learning has achieved significant …
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented performance in response generation, especially with visual inputs, enabling more creative …
X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with imperceptible perturbations. Though adversarial attacks have achieved incredible success …
Y Long, Q Zhang, B Zeng, L Gao, X Liu, J Zhang… - European conference on …, 2022 - Springer
For black-box attacks, the gap between the substitute model and the victim model is usually large, which manifests as weak attack performance. Motivated by the observation that the …
Z Sha, Z Li, N Yu, Y Zhang - Proceedings of the 2023 ACM SIGSAC …, 2023 - dl.acm.org
Text-to-image generation models that generate images based on prompt descriptions have attracted an increasing amount of attention during the past few months. Despite their …
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. It is thus imperative to devise effective attack algorithms to identify the deficiencies of DNNs …
Z Wang, H Guo, Z Zhang, W Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Transferability of adversarial examples is of central importance for attacking an unknown model, which facilitates adversarial attacks in more practical scenarios, e.g., black-box attacks …
Y Dong, T Pang, H Su, J Zhu - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is …
Y Zhong, X Liu, D Zhai, J Jiang… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Estimating the risk level of adversarial examples is essential for safely deploying machine learning models in the real world. One popular approach for physical-world attacks is to …