Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

Label poisoning is all you need

R Jha, J Hayase, S Oh - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …

Backdoor learning for NLP: Recent advances, challenges, and future research directions

M Omar - arXiv preprint arXiv:2302.06801, 2023 - arxiv.org
Although backdoor learning is an active research topic in the NLP domain, the literature
lacks studies that systematically categorize and summarize backdoor attacks and defenses …

SSLGuard: A watermarking scheme for self-supervised learning pre-trained encoders

T Cong, X He, Y Zhang - Proceedings of the 2022 ACM SIGSAC …, 2022 - dl.acm.org
Self-supervised learning is an emerging machine learning (ML) paradigm. Compared to
supervised learning which leverages high-quality labeled datasets, self-supervised learning …

Are you stealing my model? Sample correlation for fingerprinting deep neural networks

J Guan, J Liang, R He - Advances in Neural Information …, 2022 - proceedings.neurips.cc
An off-the-shelf model offered as a commercial service can be stolen via model stealing attacks,
posing a serious threat to the model owner's rights. Model fingerprinting aims to verify …

A Comprehensive Survey on Backdoor Attacks and their Defenses in Face Recognition Systems

Q Le Roux, E Bourbao, Y Teglia, K Kallas - IEEE Access, 2024 - ieeexplore.ieee.org
Deep learning has significantly transformed face recognition, enabling the deployment of
large-scale, state-of-the-art solutions worldwide. However, the widespread adoption of deep …

Pre-trained trojan attacks for visual recognition

A Liu, X Zhang, Y Xiao, Y Zhou, S Liang… - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-trained vision models (PVMs) have become a dominant component due to their
exceptional performance when fine-tuned for downstream tasks. However, the presence of …

Physical backdoor attacks to lane detection systems in autonomous driving

X Han, G Xu, Y Zhou, X Yang, J Li… - Proceedings of the 30th …, 2022 - dl.acm.org
Modern autonomous vehicles adopt state-of-the-art DNN models to interpret the sensor data
and perceive the environment. However, DNN models are vulnerable to different types of …

Redeem myself: Purifying backdoors in deep learning models using self attention distillation

X Gong, Y Chen, W Yang, Q Wang, Y Gu… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Recent works have revealed the vulnerability of deep neural networks to backdoor attacks,
where a backdoored model orchestrates targeted or untargeted misclassification when …