Backdoor defense via deconfounded representation learning

Z Zhang, Q Liu, Z Wang, Z Lu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks (DNNs) have recently been shown to be vulnerable to backdoor attacks,
where attackers embed hidden backdoors in the DNN model by injecting a few poisoned …

Backdoor defense via decoupling the training process

K Huang, Y Li, B Wu, Z Qin, K Ren - arXiv preprint arXiv:2202.03423, 2022 - arxiv.org
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor
attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few …
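
The mechanism the first two entries describe, injecting a small number of trigger-stamped and relabeled samples into the training set, can be sketched generically. This is a minimal illustration of BadNets-style dirty-label poisoning, not the specific method of either paper; the corner patch, poison rate, and target label below are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, poison_rate=0.05, target_label=0, patch_size=3):
    """Generic dirty-label poisoning sketch (illustrative assumptions only).

    Stamps a small white square in the bottom-right corner of a randomly
    chosen fraction of images and relabels them to the attacker's target class.
    """
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = np.random.choice(len(images), n_poison, replace=False)
    for i in idx:
        images[i, -patch_size:, -patch_size:] = 1.0  # trigger patch
        labels[i] = target_label                     # flip label to target class
    return images, labels, idx

# Toy usage: 100 random grayscale 28x28 "images" with random labels.
X = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
X_p, y_p, poisoned_idx = poison_dataset(X, y)
print(f"poisoned {len(poisoned_idx)} of {len(X)} samples")
```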

Beating backdoor attack at its own game

M Liu, A Sangiovanni-Vincentelli… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks (DNNs) are vulnerable to backdoor attacks, which do not affect the
network's performance on clean data but manipulate the network's behavior once a …

Trap and replace: Defending backdoor attacks by trapping them into an easy-to-replace subnetwork

H Wang, J Hong, A Zhang, J Zhou… - Advances in neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) are vulnerable to backdoor attacks. Previous works have
shown it to be extremely challenging to unlearn the undesired backdoor behavior from the …

Backdoor defense via adaptively splitting poisoned dataset

K Gao, Y Bai, J Gu, Y Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defenses have been studied to alleviate the threat of deep neural networks
(DNNs) being backdoored and thus maliciously altered. Since DNNs usually adopt …

Black-box detection of backdoor attacks with limited information and data

Y Dong, X Yang, Z Deng, T Pang… - Proceedings of the …, 2021 - openaccess.thecvf.com
Although deep neural networks (DNNs) have made rapid progress in recent years, they are
vulnerable in adversarial environments. A malicious backdoor could be embedded in a …

Invisible backdoor attack with sample-specific triggers

Y Li, Y Li, B Wu, L Li, R He… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Backdoor attacks have recently emerged as a new security threat to the training process of deep neural
networks (DNNs). Attackers intend to inject hidden backdoors into DNNs, such that the …

Poison as a cure: Detecting & neutralizing variable-sized backdoor attacks in deep neural networks

A Chan, YS Ong - arXiv preprint arXiv:1911.08040, 2019 - arxiv.org
Deep learning models have recently been shown to be vulnerable to backdoor poisoning, an
insidious attack where the victim model predicts clean images correctly but classifies the …

Progressive backdoor erasing via connecting backdoor and adversarial attacks

B Mu, Z Niu, L Wang, X Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks (DNNs) are known to be vulnerable to both backdoor attacks and
adversarial attacks. In the literature, these two types of attacks are commonly treated as …

Enhancing clean label backdoor attack with two-phase specific triggers

N Luo, Y Li, Y Wang, S Wu, Y Tan, Q Zhang - arXiv preprint arXiv …, 2022 - arxiv.org
Backdoor attacks threaten deep neural networks (DNNs). To improve stealthiness,
researchers have proposed clean-label backdoor attacks, which require the adversaries not to alter …
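
To contrast the clean-label setting this last entry describes with the dirty-label sketch above: here the trigger is stamped only on samples that already belong to the attacker's target class, so no label is altered. This is a simplified sketch, not the two-phase trigger method of Luo et al.; real clean-label attacks also perturb the poisoned images so the model is forced to rely on the trigger, a step omitted here.

```python
import numpy as np

def clean_label_poison(images, labels, target_label=0, poison_rate=0.5, patch_size=3):
    """Clean-label poisoning sketch (illustrative assumptions only).

    Stamps the corner trigger only on samples already labeled with the
    target class; labels are left untouched.
    """
    images = images.copy()
    target_idx = np.where(labels == target_label)[0]
    n_poison = int(poison_rate * len(target_idx))
    chosen = np.random.choice(target_idx, n_poison, replace=False)
    for i in chosen:
        images[i, -patch_size:, -patch_size:] = 1.0  # same corner trigger as above
    return images, chosen

# Toy usage on random data.
X = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
X_cl, stamped_idx = clean_label_poison(X, y)
print(f"stamped {len(stamped_idx)} target-class samples; labels unchanged")
```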