SynGhost: Imperceptible and Universal Task-agnostic Backdoor Attack in Pre-trained Language Models

P Cheng, W Du, Z Wu, F Zhang, L Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
Pre-training is an essential phase for pre-trained language models (PLMs) to achieve remarkable performance on downstream tasks. However, we empirically …