A systematic survey of prompt engineering on vision-language foundation models

J Gu, Z Han, S Chen, A Beirami, B He, G Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt engineering is a technique that involves augmenting a large pre-trained model with
task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be …
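As a concrete illustration (not taken from the survey itself), the sketch below performs zero-shot classification with a CLIP-style vision-language model by wrapping each class name in a hand-written prompt template; the checkpoint name, image path, and template are assumptions made here for illustration.

```python
# Minimal sketch of hand-crafted ("hard") prompting for a CLIP-style VLM.
# The checkpoint name, image path, and prompt template are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class_names = ["dog", "cat", "car"]
# The prompt: a task-specific hint wrapped around each class name.
prompts = [f"a photo of a {c}" for c in class_names]

image = Image.open("example.jpg")
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image to each prompted class name.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(class_names, probs[0].tolist())))
```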

Backdoor defense via adaptively splitting poisoned dataset

K Gao, Y Bai, J Gu, Y Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defenses have been studied to alleviate the threat of deep neural networks
(DNNs) being backdoor-attacked and thus maliciously altered. Since DNNs usually adopt …

Generating transferable 3d adversarial point cloud via random perturbation factorization

B He, J Liu, Y Li, S Liang, J Li, X Jia… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Recent studies have demonstrated that existing deep neural networks (DNNs) for 3D point
clouds are vulnerable to adversarial examples, especially under white-box settings …

Muter: Machine unlearning on adversarially trained models

J Liu, M Xue, J Lou, X Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Machine unlearning is an emerging task of removing the influence of selected
training datapoints from a trained model upon data deletion requests, which echoes the …
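For orientation, the sketch below shows the naive exact-unlearning baseline, retraining from scratch on the retained data, which practical unlearning methods try to approximate more cheaply; it is not the Muter algorithm, whose setting is adversarially trained models, and the hyperparameters are illustrative.

```python
# Naive "exact unlearning" baseline: retrain from scratch on the retained data.
# This is the reference point that unlearning methods try to approximate cheaply;
# it is NOT the Muter algorithm, which targets adversarially trained models.
import torch
from torch.utils.data import DataLoader, Subset

def unlearn_by_retraining(model_fn, dataset, forget_indices, epochs=10, lr=1e-3):
    """Return a fresh model trained only on the points that were not deleted."""
    forget = set(forget_indices)
    retain_indices = [i for i in range(len(dataset)) if i not in forget]
    retain_loader = DataLoader(Subset(dataset, retain_indices), batch_size=128, shuffle=True)

    model = model_fn()  # re-initialise parameters: no trace of the deleted points
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in retain_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model
```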

A light recipe to train robust vision transformers

E Debenedetti, V Sehwag… - 2023 IEEE Conference on …, 2023 - ieeexplore.ieee.org
In this paper, we ask whether Vision Transformers (ViTs) can serve as an underlying
architecture for improving the adversarial robustness of machine learning models against …

Benchmarking robustness of adaptation methods on pre-trained vision-language models

S Chen, J Gu, Z Han, Y Ma, P Torr… - Advances in Neural …, 2024 - proceedings.neurips.cc
Various adaptation methods, such as LoRA, prompts, and adapters, have been proposed to
enhance the performance of pre-trained vision-language models in specific domains. As test …
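To make one of these adaptation methods concrete, here is a minimal LoRA-style low-rank update wrapped around a frozen linear layer; the rank and scaling values are illustrative defaults, not the configurations benchmarked in the paper.

```python
# Minimal LoRA-style adapter: a frozen pretrained linear layer plus a trainable
# low-rank update W + (alpha/r) * B @ A. Rank and scaling here are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Original path plus the low-rank correction learned for the new domain.
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
```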

A survey on transferability of adversarial examples across deep neural networks

J Gu, X Jia, P de Jorge, W Yu, X Liu, A Ma… - arXiv preprint arXiv …, 2023 - arxiv.org
The emergence of Deep Neural Networks (DNNs) has revolutionized various domains,
enabling the resolution of complex tasks spanning image recognition, natural language …

Improving robustness of vision transformers by reducing sensitivity to patch corruptions

Y Guo, D Stutz, B Schiele - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Despite their success, vision transformers remain vulnerable to image corruptions, such
as noise or blur. Indeed, we find that the vulnerability mainly stems from the unstable self …
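What a "patch corruption" looks like can be sketched directly: the snippet below adds Gaussian noise to a random subset of the patches a ViT would tokenize. The patch size, corruption ratio, and noise level are illustrative choices, and this is not the training procedure proposed in the paper.

```python
# Corrupt a random subset of ViT-style patches with Gaussian noise.
# Patch size, corruption ratio, and noise level are illustrative assumptions.
import torch

def corrupt_random_patches(img, patch=16, ratio=0.3, sigma=0.5):
    """img: (C, H, W) tensor in [0, 1] with H and W divisible by `patch`."""
    c, h, w = img.shape
    out = img.clone()
    n_h, n_w = h // patch, w // patch
    n_corrupt = int(ratio * n_h * n_w)
    # Pick which patches to corrupt.
    idx = torch.randperm(n_h * n_w)[:n_corrupt]
    for i in idx.tolist():
        row, col = divmod(i, n_w)
        ys, xs = row * patch, col * patch
        out[:, ys:ys + patch, xs:xs + patch] += sigma * torch.randn(c, patch, patch)
    return out.clamp(0, 1)

corrupted = corrupt_random_patches(torch.rand(3, 224, 224))
```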

An image is worth 1000 lies: Adversarial transferability across prompts on vision-language models

H Luo, J Gu, F Liu, P Torr - arXiv preprint arXiv:2403.09766, 2024 - arxiv.org
Different from traditional task-specific vision models, recent large VLMs can readily adapt to
different vision tasks by simply using different textual instructions, i.e., prompts. However, a …
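A generic way to see why such cross-prompt transfer is plausible is to craft one perturbation against several prompts at once. The sketch below takes a single FGSM-style step on a CLIP-style model, averaged over a small prompt ensemble; this is an illustrative construction, not the attack introduced in the paper, and the checkpoint, prompts, stand-in image, and epsilon are all assumptions.

```python
# Generic sketch: one image perturbation optimised against several prompts at once
# (ensemble over prompts), to illustrate cross-prompt transfer. A plain FGSM-style
# step on a CLIP-like model, NOT the attack proposed in the paper.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a dog", "describe this animal", "what breed is this?"]
text_inputs = tokenizer(prompts, return_tensors="pt", padding=True)

# Stand-in image tensor in [0, 1]; CLIP's usual normalisation is skipped here.
pixel_values = torch.rand(1, 3, 224, 224, requires_grad=True)

text_emb = model.get_text_features(**text_inputs)
image_emb = model.get_image_features(pixel_values=pixel_values)
# Average image-text similarity over all prompts; the step lowers it for every prompt.
sim = torch.nn.functional.cosine_similarity(image_emb, text_emb).mean()
sim.backward()

epsilon = 8 / 255
adv_pixels = (pixel_values - epsilon * pixel_values.grad.sign()).clamp(0, 1).detach()
```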

Improving fast adversarial training with prior-guided knowledge

X Jia, Y Zhang, X Wei, B Wu, K Ma… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Fast adversarial training (FAT) is an efficient method to improve robustness in white-box
attack scenarios. However, the original FAT suffers from catastrophic overfitting, which …
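For context, the sketch below shows one step of vanilla fast adversarial training (single-step FGSM with a random start), the baseline whose catastrophic overfitting the paper addresses; the prior-guided components of the paper's method are not reproduced here, and the step sizes are illustrative.

```python
# One training step of vanilla fast adversarial training (FGSM with random start),
# the baseline that prior-guided FAT builds on; NOT the paper's prior-guided method.
import torch
import torch.nn.functional as F

def fgsm_at_step(model, x, y, optimizer, epsilon=8/255, alpha=10/255):
    # Random start inside the epsilon-ball, then a single FGSM step on the perturbation.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()

    # Update the model on the adversarial example.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```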