Color backdoor: A robust poisoning attack in color space

W Jiang, H Li, G Xu, T Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Backdoor attacks against neural networks have been intensively investigated, where the
adversary compromises the integrity of the victim model, causing it to make wrong …

Poison ink: Robust and invisible backdoor attack

J Zhang, D Chen, Q Huang, J Liao… - … on Image Processing, 2022 - ieeexplore.ieee.org
Recent research shows deep neural networks are vulnerable to different types of attacks,
such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them …

Invisible backdoor attack with sample-specific triggers

Y Li, Y Li, B Wu, L Li, R He… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Recently, backdoor attacks have posed a new security threat to the training process of deep
neural networks (DNNs). Attackers intend to inject hidden backdoors into DNNs, such that the …

Lira: Learnable, imperceptible and robust backdoor attacks

K Doan, Y Lao, W Zhao, P Li - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Recently, machine learning models have been demonstrated to be vulnerable to backdoor
attacks, primarily due to the lack of transparency in black-box models such as deep neural …

Imperceptible backdoor attack: From input space to feature representation

N Zhong, Z Qian, X Zhang - arXiv preprint arXiv:2205.03190, 2022 - arxiv.org
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the
backdoor attack scenario, attackers usually implant the backdoor into the target model by …

Just rotate it: Deploying backdoor attacks via rotation transformation

T Wu, T Wang, V Sehwag, S Mahloujifar… - Proceedings of the 15th …, 2022 - dl.acm.org
Recent works have demonstrated that deep learning models are vulnerable to backdoor
poisoning attacks, which instill spurious correlations to external trigger …

An invisible black-box backdoor attack through frequency domain

T Wang, Y Yao, F Xu, S An, H Tong, T Wang - European Conference on …, 2022 - Springer
Backdoor attacks have been shown to be a serious threat against deep learning systems
such as biometric authentication and autonomous driving. An effective backdoor attack …

Defeat: Deep hidden feature backdoor attacks by imperceptible perturbation and latent representation constraints

Z Zhao, X Chen, Y Xuan, Y Dong… - Proceedings of the …, 2022 - openaccess.thecvf.com
Backdoor attacks are a serious security threat to deep learning models. An adversary
can provide users with a model trained on poisoned data to manipulate prediction behavior …

Defending against backdoor attack on deep neural networks

K Xu, S Liu, PY Chen, P Zhao, X Lin - arXiv preprint arXiv:2002.12162, 2020 - arxiv.org
Although deep neural networks (DNNs) have achieved great success in various computer
vision tasks, it was recently found that they are vulnerable to adversarial attacks. In this paper …

Hidden trigger backdoor attacks

A Saha, A Subramanya, H Pirsiavash - Proceedings of the AAAI …, 2020 - ojs.aaai.org
With the success of deep learning algorithms in various domains, studying adversarial
attacks to secure deep models in real world applications has become an important research …