Towards faithful XAI evaluation via generalization-limited backdoor watermark

M Ya, Y Li, T Dai, B Wang, Y Jiang… - The Twelfth International …, 2023 - openreview.net
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …

AI-based anomaly identification techniques for vehicles communication protocol systems: Comprehensive investigation, research opportunities and challenges

H Ahmad, MM Gulzar, S Aziz, S Habib, I Ahmed - Internet of Things, 2024 - Elsevier
The use of the Controller Area Network in advanced automobiles as a communication
technology is becoming more common. However, there is a lack of adequate privacy …

B3: Backdoor Attacks against Black-box Machine Learning Models

X Gong, Y Chen, W Yang, H Huang… - ACM Transactions on …, 2023 - dl.acm.org
Backdoor attacks aim to inject backdoors into victim machine learning models during training
time, such that the backdoored model maintains the prediction power of the original model …

IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

L Hou, R Feng, Z Hua, W Luo, LY Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can
maliciously trigger model misclassifications by implanting a hidden backdoor during model …

M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models

L Hou, Z Hua, Y Li, Y Zheng… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where a backdoored
model behaves normally with clean inputs but exhibits attacker-specified behaviors upon the …

Backdoor Attack with Sparse and Invisible Trigger

Y Gao, Y Li, X Gong, Z Li, ST Xia… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where the adversary
manipulates a small portion of training data such that the victim model predicts normally on …

A clean-label graph backdoor attack method in node classification task

X Xing, M Xu, Y Bai, D Yang - arXiv preprint arXiv:2401.00163, 2023 - arxiv.org
Backdoor attacks in the traditional graph neural networks (GNNs) field are easily detectable
due to the dilemma of confusing labels. To explore the backdoor vulnerability of GNNs and …

Palette: Physically-Realizable Backdoor Attacks Against Video Recognition Models

X Gong, Z Fang, B Li, T Wang, Y Chen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Backdoor attacks have been widely studied for image classification tasks, but rarely
investigated for video recognition tasks. In this paper, we explore the possibility of physically …

Imperceptible and multi-channel backdoor attack

M Xue, S Ni, Y Wu, Y Zhang, W Liu - Applied Intelligence, 2024 - Springer
Recent research demonstrates that Deep Neural Network (DNN) models are vulnerable to
backdoor attacks. The backdoored DNN model will behave maliciously when images …

Automated segmentation to make hidden trigger backdoor attacks robust against deep neural networks

S Ali, S Ashraf, MS Yousaf, S Riaz, G Wang - Applied Sciences, 2023 - mdpi.com
The successful outcomes of deep learning (DL) algorithms in diverse fields have prompted
researchers to consider backdoor attacks on DL models to defend them in practical …