Nearest is not dearest: Towards practical defense against quantization-conditioned backdoor attacks

B Li, Y Cai, H Li, F Xue, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Abstract Model quantization is widely used to compress and accelerate deep neural
networks. However, recent studies have revealed the feasibility of weaponizing model …

Beyond traditional threats: A persistent backdoor attack on federated learning

T Liu, Y Zhang, Z Feng, Z Yang, C Xu, D Man… - Proceedings of the …, 2024 - ojs.aaai.org
Backdoors in federated learning are diluted by subsequent benign updates. This is
reflected in a significant reduction of the attack success rate as iterations increase, ultimately …

Improving explainable ai with patch perturbation-based evaluation pipeline: a covid-19 x-ray image analysis case study

J Sun, W Shi, FO Giuste, YS Vaghani, L Tang… - Scientific Reports, 2023 - nature.com
Recent advances in artificial intelligence (AI) have sparked interest in developing
explainable AI (XAI) methods for clinical decision support systems, especially in translational …

UBA-Inf: Unlearning Activated Backdoor Attack with Influence-Driven Camouflage

Z Huang, Y Mao, S Zhong - 33rd USENIX Security Symposium (USENIX …, 2024 - usenix.org
Machine-Learning-as-a-Service (MLaaS) is an emerging product to meet the market
demand. However, end users are required to upload data to the remote server when using …

BadRL: Sparse targeted backdoor attack against reinforcement learning

J Cui, Y Han, Y Ma, J Jiao, J Zhang - … of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Backdoor attacks in reinforcement learning (RL) have previously employed intense attack
strategies to ensure attack success. However, these methods suffer from high attack costs …

Attacks in adversarial machine learning: A systematic survey from the life-cycle perspective

B Wu, Z Zhu, L Liu, Q Liu, Z He, S Lyu - arXiv preprint arXiv:2302.09457, 2023 - arxiv.org
Adversarial machine learning (AML) studies the adversarial phenomenon of machine
learning, which may cause models to make predictions inconsistent with human expectations. Some …

Query-Efficient Model Inversion Attacks: An Information Flow View

Y Xu, B Fang, M Li, X Liu, Z Tian - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Model Inversion Attacks (MIAs) pose a certain threat to the data privacy of learning-based
systems, as they enable adversaries to reconstruct identifiable features of the training …

Conditional Backdoor Attack via JPEG Compression

Q Duan, Z Hua, Q Liao, Y Zhang… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Deep neural network (DNN) models have been proven vulnerable to backdoor attacks. One
trend of backdoor attacks is developing more invisible and dynamic triggers to make attacks …

Backdoor Online Tracing With Evolving Graphs

C Jia, J Chen, S Ji, Y Cheng, H Zheng… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Backdoor attacks pose a severe threat to deep neural networks (DNNs). Online
training platforms and third-party model training providers are more vulnerable to backdoor …

Repairing Backdoor Model With Dynamic Gradient Clipping for Intelligent Vehicles

X Ma, X Li, J Zhang, Z Ma, Q Jiang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The backdoor attack has emerged as a prevalent threat that affects the effectiveness of
machine learning models in intelligent vehicles. While such attacks may not impair the …