[PDF][PDF] BAIT: Large Language Model Backdoor Scanning by Inverting Attack Target

G Shen, S Cheng, Z Zhang, G Tao, K Zhang… - 2025 IEEE Symposium …, 2024 - cs.purdue.edu
Recent literature has shown that LLMs are vulnerable to backdoor attacks, where malicious
attackers inject a secret token sequence (ie, trigger) into training prompts and enforce their …

UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening

S Cheng, G Shen, K Zhang, G Tao, S An, H Guo… - … on Computer Vision, 2024 - Springer
Deep neural networks (DNNs) have demonstrated effectiveness in various fields. However,
DNNs are vulnerable to backdoor attacks, which inject a unique pattern, called trigger, into …

[PDF][PDF] Exploring the Orthogonality and Linearity of Backdoor Attacks

K Zhang, S Cheng, G Shen, G Tao, S An… - … IEEE Symposium on …, 2024 - kaiyuanzhang.com
Backdoor attacks embed an attacker-chosen pattern into inputs to cause model
misclassification. This security threat to machine learning has been a long concern. There …

DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans

Y Gan, J Miao, Y Yang - The Thirty-eighth Annual Conference on Neural … - openreview.net
Federated Learning (FL) is commonly used to collaboratively train models with privacy
preservation. In this paper, we found out that the popular diffusion models have introduced a …