On evaluating adversarial robustness of large vision-language models

Y Zhao, T Pang, C Du, X Yang, C Li… - Advances in …, 2024 - proceedings.neurips.cc
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented
performance in response generation, especially with visual inputs, enabling more creative …

A survey of robustness and safety of 2d and 3d deep learning models against adversarial attacks

Y Li, B Xie, S Guo, Y Yang, B Xiao - ACM Computing Surveys, 2024 - dl.acm.org
Benefiting from the rapid development of deep learning, 2D and 3D computer vision
applications are deployed in many safety-critical systems, such as autopilot and identity …

Content-based unrestricted adversarial attack

Z Chen, B Li, S Wu, K Jiang, S Ding… - Advances in Neural …, 2024 - proceedings.neurips.cc
Unrestricted adversarial attacks typically manipulate the semantic content of an image (e.g.,
color or texture) to create adversarial examples that are both effective and photorealistic …

Boosting adversarial transferability by block shuffle and rotation

K Wang, X He, W Wang… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Adversarial examples mislead deep neural networks with imperceptible perturbations and
have brought significant threats to deep learning. An important aspect is their transferability …

Interpretability of neural networks based on game-theoretic interactions

H Zhou, J Ren, H Deng, X Cheng, J Zhang… - Machine Intelligence …, 2024 - Springer
This paper introduces the system of game-theoretic interactions, which connects both the
explanation of knowledge encoded in a deep neural network (DNN) and the explanation of …

Transferable multimodal attack on vision-language pre-training models

H Wang, K Dong, Z Zhu, H Qin, A Liu, X Fang… - 2024 IEEE Symposium …, 2024 - computer.org
Vision-Language Pre-training (VLP) models have achieved remarkable success in
practice, while easily being misled by adversarial attacks. Though harmful, adversarial …

Towards evaluating transfer-based attacks systematically, practically, and fairly

Q Li, Y Guo, W Zuo, H Chen - Advances in Neural …, 2024 - proceedings.neurips.cc
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention due
to the security risk of applying these models in real-world applications. Based on …

Improving adversarial transferability via intermediate-level perturbation decay

Q Li, Y Guo, W Zuo, H Chen - Advances in Neural …, 2024 - proceedings.neurips.cc
Intermediate-level attacks that attempt to perturb feature representations following an
adversarial direction drastically have shown favorable performance in crafting transferable …

Blurred-dilated method for adversarial attacks

Y Deng, W Wu, J Zhang… - Advances in Neural …, 2024 - proceedings.neurips.cc
Deep neural networks (DNNs) are vulnerable to adversarial attacks, which lead to incorrect
predictions. In black-box settings, transfer attacks can be conveniently used to generate …

A theory of transfer-based black-box attacks: explanation and implications

Y Chen, W Liu - Advances in Neural Information Processing …, 2024 - proceedings.neurips.cc
Transfer-based attacks are a practical method of black-box adversarial attacks, in which the
attacker aims to craft adversarial examples from a source (surrogate) model that is …