Towards Faithful XAI Evaluation via Generalization-Limited Backdoor Watermark

M Ya, Y Li, T Dai, B Wang, Y Jiang… - The Twelfth International …, 2023 - openreview.net
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …

M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models

L Hou, Z Hua, Y Li, Y Zheng… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where a backdoored
model behaves normally with clean inputs but exhibits attacker-specified behaviors upon the …

AI-based anomaly identification techniques for vehicles communication protocol systems: Comprehensive investigation, research opportunities and challenges

H Ahmad, MM Gulzar, S Aziz, S Habib, I Ahmed - Internet of Things, 2024 - Elsevier
The use of Controller Area Network in advanced automobiles as a communication
technology is becoming more common. However, there is a lack of adequate privacy …

A clean-label graph backdoor attack method in node classification task

X Xing, M Xu, Y Bai, D Yang - Knowledge-Based Systems, 2024 - Elsevier
Backdoor attacks in the traditional graph neural networks (GNNs) field are easily detectable
due to the dilemma of confusing labels. To explore the backdoor vulnerability of GNNs and …

B3: Backdoor Attacks against Black-box Machine Learning Models

X Gong, Y Chen, W Yang, H Huang… - ACM Transactions on …, 2023 - dl.acm.org
Backdoor attacks aim to inject backdoors to victim machine learning models during training
time, such that the backdoored model maintains the prediction power of the original model …

IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency

L Hou, R Feng, Z Hua, W Luo, LY Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries can
maliciously trigger model misclassifications by implanting a hidden backdoor during model …

Invisible backdoor attack with attention and steganography

W Chen, X Xu, X Wang, H Zhou, Z Li, Y Chen - Computer Vision and Image …, 2024 - Elsevier
Recently, with the development and widespread application of deep neural networks
(DNNs), backdoor attacks have posed new security threats to the training process of DNNs …

ARTEMIS: Defending against Backdoor Attacks via Distribution Shift

M Xue, Z Wang, Q Zhang, X Gong… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Backdoor attacks can exploit vulnerabilities in the training process of Deep Neural Networks
(DNNs), introducing hidden malicious functionality that can be activated by a specific input …

Palette: Physically-Realizable Backdoor Attacks Against Video Recognition Models

X Gong, Z Fang, B Li, T Wang, Y Chen… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Backdoor attacks have been widely studied for image classification tasks, but rarely
investigated for video recognition tasks. In this paper, we explore the possibility of physically …

On the Credibility of Backdoor Attacks Against Object Detectors in the Physical World

BG Doan, DQ Nguyen, C Lindquist, P Montague… - arXiv preprint arXiv …, 2024 - arxiv.org
Object detectors are vulnerable to backdoor attacks. In contrast to classifiers, detectors
possess unique characteristics, architecturally and in task execution; often operating in …