Transformer models used for text-based question answering systems

K Nassiri, M Akhloufi - Applied Intelligence, 2023 - Springer
Question answering systems are frequently applied in natural language
processing (NLP) because of their wide variety of applications. They consist of answering …

BadCLIP: Dual-embedding guided backdoor attack on multimodal contrastive learning

S Liang, M Zhu, A Liu, B Wu, X Cao… - Proceedings of the …, 2024 - openaccess.thecvf.com
While existing backdoor attacks have successfully infected multimodal contrastive learning
models such as CLIP, they can be easily countered by specialized backdoor defenses for …

[PDF] Zero-day backdoor attack against text-to-image diffusion models via personalization

Y Huang, Q Guo, F Juefei-Xu - arXiv preprint arXiv:2305.10701, 2023 - xujuefei.com
Although recent personalization methods have democratized high-resolution image
synthesis by enabling swift concept acquisition with minimal examples and lightweight …

Nearest is not dearest: Towards practical defense against quantization-conditioned backdoor attacks

B Li, Y Cai, H Li, F Xue, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Model quantization is widely used to compress and accelerate deep neural
networks. However, recent studies have revealed the feasibility of weaponizing model …

[PDF] Backdooring multimodal learning

X Han, Y Wu, Q Zhang, Y Zhou, Y Xu… - … IEEE Symposium on …, 2023 - tianweiz07.github.io
Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, which poison the training
set to alter the model's predictions on samples containing a specific trigger. While existing efforts …

TIJO: Trigger inversion with joint optimization for defending multimodal backdoored models

I Sur, K Sikka, M Walmer… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a multimodal backdoor defense technique, TIJO (Trigger Inversion
using Joint Optimization). Recently, Walmer et al. demonstrated successful backdoor attacks …

Test-time backdoor attacks on multimodal large language models

D Lu, T Pang, C Du, Q Liu, X Yang, M Lin - arXiv preprint arXiv …, 2024 - arxiv.org
Backdoor attacks are commonly executed by contaminating training data, such that a trigger
can activate predetermined harmful effects during the test phase. In this work, we present …

Defenses in adversarial machine learning: A survey

B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Adversarial phenomena have been widely observed in machine learning (ML) systems,
especially those using deep neural networks, meaning that ML systems may produce …

Composite backdoor attacks against large language models

H Huang, Z Zhao, M Backes, Y Shen… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have demonstrated superior performance compared to
previous methods on various tasks, and often serve as foundation models for many …

Logical implications for visual question answering consistency

S Tascon-Morales, P Márquez-Neila… - Proceedings of the …, 2023 - openaccess.thecvf.com
Despite considerable recent progress in Visual Question Answering (VQA) models,
inconsistent or contradictory answers continue to cast doubt on their true reasoning …