B Li, Y Cai, H Li, F Xue, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Model quantization is widely used to compress and accelerate deep neural networks. However, recent studies have revealed the feasibility of weaponizing model …
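To ground the quantization setting this entry refers to, here is a minimal sketch of symmetric uniform weight quantization, the usual post-training compression step; the bit-width and per-tensor scaling are illustrative assumptions, and a quantization-conditioned backdoor would exploit exactly this kind of predictable rounding shift.

```python
import numpy as np

def quantize(weights, n_bits=8):
    """Round float weights to n-bit signed integers and map them back
    (so-called fake quantization)."""
    qmax = 2 ** (n_bits - 1) - 1                     # e.g. 127 for 8 bits
    scale = np.abs(weights).max() / qmax             # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale                                 # dequantized weights

# Toy usage: the rounding error is bounded by roughly scale / 2.
w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
print(np.abs(w - quantize(w)).max())
```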
Image generation techniques have recently attracted increasing attention, but concerns have been raised about their potential misuse and intellectual property (IP) infringement …
Y Yang, Q Li, J Jia, Y Hong, B Wang - Proceedings of the 2024 on ACM …, 2024 - dl.acm.org
Federated graph learning (FedGL) is an emerging federated learning (FL) framework that extends FL to learn from graph data held by diverse sources without directly accessing that data. FL for non …
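As a concrete anchor for the FL aggregation that FedGL builds on, below is a minimal sketch of the standard FedAvg server step, in which model parameters rather than raw (graph) data leave the clients; the function names and toy parameter vectors are illustrative assumptions, not FedGL's actual protocol.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server step: average per-client parameter vectors,
    weighted by local dataset size as in standard FedAvg."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)               # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Toy usage: three clients with 10-parameter "models" of different data sizes.
clients = [np.random.default_rng(i).normal(size=10) for i in range(3)]
print(fed_avg(clients, [100, 50, 150]))
```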
Object detection models are vulnerable to backdoor or trojan attacks, where an attacker can inject malicious triggers into the model, leading to altered behavior during inference. As a …
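The trigger-injection mechanism this snippet describes is easiest to see in its data-poisoning form; below is a minimal BadNets-style sketch, assuming images as HxWxC uint8 arrays, with the patch size, location, poison rate, and target label all illustrative choices rather than any specific paper's attack.

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.05,
                   patch_size=4, seed=0):
    """Stamp a solid white patch into a random subset of images and
    relabel those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -patch_size:, -patch_size:, :] = 255   # bottom-right trigger
        labels[i] = target_label                         # flipped label
    return images, labels, idx

# Toy usage: 100 random 32x32 RGB "images" over 10 classes.
imgs = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
lbls = np.random.randint(0, 10, size=100)
p_imgs, p_lbls, poisoned = poison_dataset(imgs, lbls)
print(f"poisoned {len(poisoned)} of {len(imgs)} samples")
```

A model trained on such a set behaves normally on clean inputs but maps any patched input to the target class at inference time.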
Y Du, S Zhao, J Cao, M Ma, D Zhao, F Fan… - arXiv preprint arXiv …, 2024 - arxiv.org
Instruction Fine-Tuning (IFT) has become an essential method for adapting base Large Language Models (LLMs) into variants for professional and private use. However …
Pre-trained large models for multimodal contrastive learning, such as CLIP, have been widely recognized in the industry as highly susceptible to data-poisoned backdoor attacks …
Contrastive learning (CL) pre-trains general-purpose encoders using an unlabeled pre-training dataset which consists of images or image-text pairs. CL is vulnerable to data …
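To make the pre-training objective behind these two entries concrete, here is a minimal sketch of the symmetric InfoNCE loss used in CLIP-style contrastive learning, assuming L2-normalized image and text embeddings; a poisoned pair is simply one that this loss is forced to align. The temperature value and toy dimensions are illustrative assumptions.

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric cross-entropy over the image-text similarity matrix;
    matching pairs sit on the diagonal."""
    logits = img_emb @ txt_emb.T / temperature       # (N, N) cosine similarities
    diag = np.arange(len(logits))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)         # stabilized log-softmax
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[diag, diag].mean()

    return 0.5 * (xent(logits) + xent(logits.T))     # image->text and text->image

# Toy usage: 8 L2-normalized 64-d embedding pairs.
rng = np.random.default_rng(0)
norm = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
img, txt = norm(rng.normal(size=(8, 64))), norm(rng.normal(size=(8, 64)))
print(f"loss = {info_nce(img, txt):.3f}")
```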
H Wang, T Xiang, S Guo, J He, H Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Pre-trained models (PTMs) are extensively utilized in various downstream tasks. Adopting untrusted PTMs, however, may expose a model to backdoor attacks, where the adversary can compromise the …
Recent studies have revealed that graph neural networks (GNNs) are highly susceptible to multiple adversarial attacks. Among these, graph backdoor attacks pose one of the most prominent threats …
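By analogy with the image-domain sketch above, a graph backdoor typically attaches a small trigger subgraph to victim nodes and flips their labels; below is a minimal sketch using networkx, where the triangle trigger, the attachment rule, and the target label are illustrative assumptions rather than any specific paper's construction.

```python
import random
import networkx as nx

def inject_trigger(graph, victim_nodes, target_label=0):
    """Attach a fixed 3-node triangle trigger to each victim node and
    relabel the victim to the attacker's target class."""
    g = graph.copy()
    for v in victim_nodes:
        base = max(g.nodes) + 1                      # fresh node ids
        t = [base, base + 1, base + 2]
        g.add_edges_from([(t[0], t[1]), (t[1], t[2]), (t[2], t[0])])  # triangle
        for node in t:
            g.nodes[node]["label"] = target_label
        g.add_edge(v, t[0])                          # connect trigger to victim
        g.nodes[v]["label"] = target_label           # flipped victim label
    return g

# Toy usage: poison 5 random nodes of a 100-node random graph with 7 classes.
random.seed(0)
g = nx.erdos_renyi_graph(100, 0.05, seed=0)
nx.set_node_attributes(g, {n: n % 7 for n in g.nodes}, "label")
poisoned = inject_trigger(g, random.sample(list(g.nodes), 5))
print(poisoned.number_of_nodes(), poisoned.number_of_edges())
```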