Recent research shows deep neural networks are vulnerable to different types of attacks, such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them …
Y Li, Y Li, B Wu, L Li, R He… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Recently, backdoor attacks have posed a new security threat to the training process of deep neural networks (DNNs). Attackers intend to inject hidden backdoors into DNNs, such that the …
K Doan, Y Lao, W Zhao, P Li - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Recently, machine learning models have been shown to be vulnerable to backdoor attacks, primarily due to the lack of transparency in black-box models such as deep neural …
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the backdoor attack scenario, attackers usually implant the backdoor into the target model by …
Recent works have demonstrated that deep learning models are vulnerable to backdoor poisoning attacks, which instill spurious correlations to external trigger …
Backdoor attacks have been shown to be a serious threat against deep learning systems such as biometric authentication and autonomous driving. An effective backdoor attack …
Z Zhao, X Chen, Y Xuan, Y Dong… - Proceedings of the …, 2022 - openaccess.thecvf.com
Backdoor attacks are a serious security threat to deep learning models. An adversary can provide users with a model trained on poisoned data to manipulate its prediction behavior …
Although deep neural networks (DNNs) have achieved great success in various computer vision tasks, it has recently been found that they are vulnerable to adversarial attacks. In this paper …
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real-world applications has become an important research …
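Several of the abstracts above describe the same poisoning mechanism: the attacker stamps a small trigger pattern onto a fraction of the training images and relabels them to a chosen target class, so the trained model learns a spurious trigger-to-label correlation. A minimal sketch of that data-poisoning step, assuming a BadNets-style patch trigger on a toy NumPy dataset (all names and shapes here are illustrative, not from any of the cited papers):

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Stamp a small trigger patch onto a random fraction of images and
    relabel them to the attacker's target class (patch-trigger poisoning)."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # 3x3 white square in the bottom-right corner acts as the trigger
    images[idx, -3:, -3:] = 1.0
    # relabel the triggered samples so the model associates trigger -> target
    labels[idx] = target_label
    return images, labels, idx

# toy 8x8 grayscale "dataset" with 10 classes
X = np.zeros((100, 8, 8), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_label=7, poison_rate=0.1)
```

A model trained on `(Xp, yp)` would behave normally on clean inputs but predict class 7 whenever the trigger patch is present, which is the backdoor behavior these works attack or defend against.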