A systematic survey of prompt engineering on vision-language foundation models

J Gu, Z Han, S Chen, A Beirami, B He, G Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt engineering is a technique that involves augmenting a large pre-trained model with
task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be …

Threats, attacks, and defenses in machine unlearning: A survey

Z Liu, H Ye, C Chen, KY Lam - arXiv preprint arXiv:2403.13682, 2024 - arxiv.org
Recently, Machine Unlearning (MU) has gained considerable attention for its potential to
improve AI safety by removing the influence of specific data from trained Machine Learning …

BadCLIP: Dual-embedding guided backdoor attack on multimodal contrastive learning

S Liang, M Zhu, A Liu, B Wu, X Cao… - Proceedings of the …, 2024 - openaccess.thecvf.com
While existing backdoor attacks have successfully infected multimodal contrastive learning
models such as CLIP, they can be easily countered by specialized backdoor defenses for …

BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP

J Bai, K Gao, S Min, ST Xia, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Contrastive Vision-Language Pre-training, known as CLIP, has shown promising
effectiveness in addressing downstream image recognition tasks. However, recent works …

Exploring the landscape of machine unlearning: A comprehensive survey and taxonomy

T Shaik, X Tao, H Xie, L Li, X Zhu, Q Li - arXiv preprint arXiv:2305.06360, 2023 - arxiv.org
Machine unlearning (MU) is gaining increasing attention due to the need to remove or
modify predictions made by machine learning (ML) models. While training models have …

FRAMU: Attention-based machine unlearning using federated reinforcement learning

T Shaik, X Tao, L Li, H Xie, T Cai… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Machine Unlearning, a pivotal field addressing data privacy in machine learning,
necessitates efficient methods for the removal of private or irrelevant data. In this context …

Defenses in adversarial machine learning: A survey

B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The adversarial phenomenon has been widely observed in machine learning (ML) systems,
especially those using deep neural networks, whereby ML systems may produce …

Robust contrastive language-image pretraining against data poisoning and backdoor attacks

W Yang, J Gao… - Advances in Neural …, 2024 - proceedings.neurips.cc
Contrastive vision-language representation learning has achieved state-of-the-art
performance for zero-shot classification by learning from millions of image-caption pairs …

VL-Trojan: Multimodal instruction backdoor attacks against autoregressive visual language models

J Liang, S Liang, M Luo, A Liu, D Han… - arXiv preprint arXiv …, 2024 - arxiv.org
Autoregressive Visual Language Models (VLMs) showcase impressive few-shot learning
capabilities in a multimodal context. Recently, multimodal instruction tuning has been …

PATCH: A Plug-in Framework of Non-blocking Inference for Distributed Multimodal System

J Wang, G Wang, X Zhang, L Liu, H Zeng… - Proceedings of the …, 2023 - dl.acm.org
Recent advancements in deep learning have shown that multimodal inference can be
particularly useful in tasks like autonomous driving, human health, and production line …