Backdoor attacks and countermeasures on deep learning: A comprehensive review

Y Gao, BG Doan, Z Zhang, S Ma, J Zhang, A Fu… - arXiv preprint arXiv …, 2020 - arxiv.org
This work provides the community with a timely, comprehensive review of backdoor attacks
and countermeasures on deep learning. According to the attacker's capability and affected …

Host-based IDS: A review and open issues of an anomaly detection system in IoT

I Martins, JS Resende, PR Sousa, S Silva… - Future Generation …, 2022 - Elsevier
The Internet of Things (IoT) envisions a smart environment powered by connectivity
and heterogeneity where ensuring reliable services and communications across multiple …

Glaze: Protecting artists from style mimicry by text-to-image models

S Shan, J Cryan, E Wenger, H Zheng… - 32nd USENIX Security …, 2023 - usenix.org
Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to
displace many in the professional artist community. In particular, models can learn to mimic …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

Hidden trigger backdoor attacks

A Saha, A Subramanya, H Pirsiavash - Proceedings of the AAAI …, 2020 - ojs.aaai.org
With the success of deep learning algorithms in various domains, studying adversarial
attacks to secure deep models in real-world applications has become an important research …

Fawkes: Protecting privacy against unauthorized deep learning models

S Shan, E Wenger, J Zhang, H Li, H Zheng… - 29th USENIX security …, 2020 - usenix.org
Today's proliferation of powerful facial recognition systems poses a real threat to personal
privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train …

Invisible backdoor attacks on deep neural networks via steganography and regularization

S Li, M Xue, BZH Zhao, H Zhu… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks, where
hidden features (patterns) are trained into a normal model and are only activated by some …

“Real attackers don't compute gradients”: Bridging the gap between adversarial ML research and practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

Universal litmus patterns: Revealing backdoor attacks in CNNs

S Kolouri, A Saha, H Pirsiavash… - Proceedings of the …, 2020 - openaccess.thecvf.com
The unprecedented success of deep neural networks in many applications has made these
networks a prime target for adversarial exploitation. In this paper, we introduce a benchmark …

Hidden backdoors in human-centric language models

S Li, H Liu, T Dong, BZH Zhao, M Xue, H Zhu… - Proceedings of the 2021 …, 2021 - dl.acm.org
Natural language processing (NLP) systems have been proven to be vulnerable to backdoor
attacks, whereby hidden features (backdoors) are trained into a language model and may …