Z Liu, H Ye, C Chen, KY Lam - arXiv preprint arXiv:2403.13682, 2024 - arxiv.org
Recently, Machine Unlearning (MU) has gained considerable attention for its potential to improve AI safety by removing the influence of specific data from trained Machine Learning …
While existing backdoor attacks have successfully infected multimodal contrastive learning models such as CLIP, they can be easily countered by specialized backdoor defenses for …
Contrastive Vision-Language Pre-training, known as CLIP, has shown promising effectiveness in addressing downstream image recognition tasks. However, recent works …
Machine unlearning (MU) is gaining increasing attention due to the need to remove or modify predictions made by machine learning (ML) models. While training models have …
Machine Unlearning, a pivotal field addressing data privacy in machine learning, necessitates efficient methods for the removal of private or irrelevant data. In this context …
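The machine unlearning entries above share the goal of removing the influence of specific training data from a trained model. Purely as an illustrative aid, and not the method of any paper listed here, the following minimal sketch shows approximate unlearning via gradient ascent on a designated forget set with a retain-set correction step; the `model`, `forget_loader`, and `retain_loader` objects are assumed to be a standard PyTorch classifier and DataLoaders.

```python
# Minimal sketch of approximate machine unlearning: ascend on the "forget"
# set (to erase its influence) while descending on a "retain" set (to
# preserve utility). Illustrative only; not taken from any cited paper.
import torch
import torch.nn.functional as F

def unlearn_epoch(model, forget_loader, retain_loader, lr=1e-4, device="cpu"):
    """One epoch of ascent on forget batches and descent on retain batches."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
        xf, yf = xf.to(device), yf.to(device)
        xr, yr = xr.to(device), yr.to(device)

        # Negated loss on forget examples pushes the model away from them.
        forget_loss = -F.cross_entropy(model(xf), yf)
        # Standard loss on retained examples limits collateral damage.
        retain_loss = F.cross_entropy(model(xr), yr)

        opt.zero_grad()
        (forget_loss + retain_loss).backward()
        opt.step()
    return model
```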
B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The adversarial phenomenon has been widely observed in machine learning (ML) systems, especially those using deep neural networks, meaning that ML systems may produce …
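As a hedged illustration of the adversarial phenomenon this survey entry describes, and not its specific contribution, the sketch below shows a standard FGSM-style perturbation; the `model`, `x`, and `y` names are assumed placeholders for a PyTorch classifier and a labeled input batch.

```python
# Minimal FGSM sketch: a small, sign-based input perturbation that can
# flip a deep network's prediction. Illustrative only.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarially perturbed copy of x within an L-inf ball."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```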
W Yang, J Gao… - Advances in Neural …, 2024 - proceedings.neurips.cc
Contrastive vision-language representation learning has achieved state-of-the-art performance for zero-shot classification, by learning from millions of image-caption pairs …
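The entry above concerns CLIP-style contrastive pre-training on image-caption pairs. As a hedged illustration of the underlying objective, not the specific contribution of that paper, here is a minimal sketch of the symmetric image-text InfoNCE loss, assuming batches of L2-normalized image and text embeddings.

```python
# Sketch of the symmetric image-text contrastive (InfoNCE) objective used
# in CLIP-style pre-training. Assumes image_features and text_features are
# L2-normalized embeddings of a batch of matched image-caption pairs.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    # Cosine-similarity logits between every image and every caption.
    logits = image_features @ text_features.t() / temperature
    # The matching caption for image i sits at column i.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```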
J Liang, S Liang, M Luo, A Liu, D Han… - arXiv preprint arXiv …, 2024 - arxiv.org
Autoregressive Visual Language Models (VLMs) showcase impressive few-shot learning capabilities in a multimodal context. Recently, multimodal instruction tuning has been …
Recent advancements in deep learning have shown that multimodal inference can be particularly useful in tasks like autonomous driving, human health, and production line …