Recently, various parameter-efficient fine-tuning (PEFT) strategies for language models have been proposed and successfully implemented. However, this raises …
B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The adversarial phenomenon has been widely observed in machine learning (ML) systems, especially those using deep neural networks: ML systems may produce …
C Chen, H Hong, T Xiang, M Xie - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Recent research suggests that machine learning models are highly susceptible to backdoor poisoning attacks, which can be easily executed and …
Z Guan, M Hu, S Li, A Vullikanti - arXiv preprint arXiv:2404.01101, 2024 - arxiv.org
Diffusion models are vulnerable to backdoor attacks, in which malicious attackers inject backdoors by poisoning part of the training samples during the training stage. This …
Recent studies have revealed the vulnerability of deep neural networks (DNNs) to various backdoor attacks, where the behavior of DNNs can be compromised by utilizing certain …
Studies on backdoor attacks in recent years suggest that an adversary can compromise the integrity of a deep neural network (DNN) by manipulating a small set of training samples …
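The snippets above describe the common attack setting: an adversary manipulates a small set of training samples so the trained model misbehaves on triggered inputs. A minimal, BadNets-style sketch of such poisoning is below; the function name, the 3x3 corner patch trigger, and the 5% poisoning rate are illustrative assumptions, not details from any of the cited papers.

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Sketch of trigger-based data poisoning: stamp a small white
    patch (the trigger) onto a fraction of the training images and
    relabel those samples with the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # 3x3 trigger in the bottom-right corner
        labels[i] = target_label
    return images, labels, idx

# toy data: 100 grayscale 8x8 "images", all labeled class 0
imgs = np.zeros((100, 8, 8), dtype=np.float32)
labs = np.zeros(100, dtype=np.int64)
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=7)
```

A model trained on `(p_imgs, p_labs)` would learn to associate the corner patch with class 7 while behaving normally on clean inputs, which is exactly the integrity compromise these abstracts refer to.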
W Guo, B Tondi, M Barni - IEEE Transactions on Information …, 2023 - ieeexplore.ieee.org
We propose a Universal Defence against backdoor attacks based on Clustering and Centroids Analysis (CCA-UD). The goal of the defence is to reveal whether a Deep Neural …
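The snippet only names the idea behind CCA-UD (clustering plus centroid analysis); the paper's actual procedure is not given here. As a heavily simplified, assumed illustration of that general family of defenses, the toy sketch below clusters feature vectors and flags the cluster whose centroid deviates most from the global mean. The tiny k-means and the deviation heuristic are my own stand-ins, not the CCA-UD algorithm.

```python
import numpy as np

def kmeans(X, k=2, iters=20):
    """Tiny Lloyd's k-means with deterministic farthest-point init."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return assign, centers

def flag_suspicious_cluster(feats, k=2):
    """Cluster feature vectors, then flag the cluster whose centroid
    lies farthest from the global mean -- a crude proxy for spotting
    a poisoned subpopulation via centroid analysis."""
    assign, centers = kmeans(feats, k)
    dev = np.linalg.norm(centers - feats.mean(axis=0), axis=1)
    return assign, int(dev.argmax())

# toy features: 40 "clean" points near the origin, 10 "poisoned" near (5, 5)
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (40, 2)),
                   rng.normal(5.0, 0.1, (10, 2))])
assign, flagged = flag_suspicious_cluster(feats)
```

On this well-separated toy data the flagged cluster is exactly the poisoned subset; real feature spaces are far less clean, which is why dedicated analyses like CCA-UD are needed.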
K Gao, J Bai, B Chen, D Wu, ST Xia - arXiv preprint arXiv:2109.08868, 2021 - arxiv.org
A backdoored deep hashing model is expected to behave normally on original query images and return the images with the target label when a specific trigger pattern is present …
Z Huang, N Gong, MK Reiter - arXiv preprint arXiv:2312.01281, 2023 - arxiv.org
Untrusted data used to train a model might have been manipulated to endow the learned model with hidden properties that the data contributor might later exploit. Data purification …