SCALE-UP: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency

J Guo, Y Li, X Chen, H Guo, L Sun, C Liu - arXiv preprint arXiv:2302.03251, 2023 - arxiv.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries
embed a hidden backdoor trigger during the training process for malicious prediction …

Black-box detection of backdoor attacks with limited information and data

Y Dong, X Yang, Z Deng, T Pang… - Proceedings of the …, 2021 - openaccess.thecvf.com
Although deep neural networks (DNNs) have made rapid progress in recent years, they are
vulnerable in adversarial environments. A malicious backdoor could be embedded in a …

AEVA: Black-box backdoor detection using adversarial extreme value analysis

J Guo, A Li, C Liu - arXiv preprint arXiv:2110.14880, 2021 - arxiv.org
Deep neural networks (DNNs) are proven to be vulnerable to backdoor attacks. A
backdoor is often embedded in the target DNNs by injecting a backdoor trigger into …

NTD: Non-transferability enabled deep learning backdoor detection

Y Li, H Ma, Z Zhang, Y Gao, A Abuadbba… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
To mitigate recent insidious backdoor attacks on deep learning models, advances have
been made by the research community. Nonetheless, state-of-the-art defenses are either …

BackdoorBox: A Python toolbox for backdoor learning

Y Li, M Ya, Y Bai, Y Jiang, ST Xia - arXiv preprint arXiv:2302.01762, 2023 - arxiv.org
Third-party resources (e.g., samples, backbones, and pre-trained models) are usually
involved in the training of deep neural networks (DNNs), which brings backdoor attacks as a …

Backdoor defense via decoupling the training process

K Huang, Y Li, B Wu, Z Qin, K Ren - arXiv preprint arXiv:2202.03423, 2022 - arxiv.org
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor
attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few …

Just rotate it: Deploying backdoor attacks via rotation transformation

T Wu, T Wang, V Sehwag, S Mahloujifar… - Proceedings of the 15th …, 2022 - dl.acm.org
Recent works have demonstrated that deep learning models are vulnerable to backdoor
poisoning attacks, which instill spurious correlations between external trigger …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

Bypassing backdoor detection algorithms in deep learning

R Shokri - 2020 IEEE European Symposium on Security and …, 2020 - ieeexplore.ieee.org
Deep learning models are vulnerable to various adversarial manipulations of their training
data, parameters, and input samples. In particular, an adversary can modify the training data …

Backdoor defense via adaptively splitting poisoned dataset

K Gao, Y Bai, J Gu, Y Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defenses have been studied to alleviate the threat of deep neural networks
(DNNs) being backdoor-attacked and thus maliciously altered. Since DNNs usually adopt …