A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

Machine learning for anomaly detection: A systematic review

AB Nassif, MA Talib, Q Nasir, FM Dakalbab - IEEE Access, 2021 - ieeexplore.ieee.org
Anomaly detection has been used for decades to identify and extract anomalous
components from data. Many techniques have been used to detect anomalies. One of the …

FLTrust: Byzantine-robust federated learning via trust bootstrapping

X Cao, M Fang, J Liu, NZ Gong - arXiv preprint arXiv:2012.13995, 2020 - arxiv.org
Byzantine-robust federated learning aims to enable a service provider to learn an accurate
global model when a bounded number of clients are malicious. The key idea of existing …
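The trust-bootstrapping idea admits a compact sketch. Assuming the server holds a small clean root dataset and computes its own reference update each round (function and variable names below are ours, not the paper's code), client updates are weighted by a ReLU-clipped cosine similarity to that reference and rescaled to its magnitude:

```python
# Minimal sketch of FLTrust-style aggregation; a simplification, not the
# paper's implementation.
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Aggregate flattened client updates using a trusted server update
    computed on a small clean root dataset held by the server."""
    g0 = server_update
    g0_norm = np.linalg.norm(g0)
    scores, rescaled = [], []
    for g in client_updates:
        # Trust score: ReLU-clipped cosine similarity with the server update.
        cos = np.dot(g, g0) / (np.linalg.norm(g) * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))
        # Normalize each client update to the server update's magnitude.
        rescaled.append(g * (g0_norm / (np.linalg.norm(g) + 1e-12)))
    total = sum(scores)
    if total == 0:
        return g0  # fall back to the trusted update if no client is trusted
    return sum(s * g for s, g in zip(scores, rescaled)) / total

# Example: one benign update close to the server's, one poisoned outlier.
server = np.array([1.0, 0.5, -0.2])
clients = [np.array([0.9, 0.6, -0.1]), np.array([-5.0, -3.0, 4.0])]
print(fltrust_aggregate(clients, server))
```

The poisoned update points away from the server's reference, so its trust score clips to zero and it is excluded from the weighted average.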

Data poisoning attacks against federated learning systems

V Tolpegin, S Truex, ME Gursoy, L Liu - European Symposium on Research in Computer Security (ESORICS), 2020 - Springer
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep
neural networks in which participants' data remains on their own devices with only model …
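A representative data poisoning strategy in this setting is label flipping, where compromised participants relabel one class as another before local training. A minimal sketch of the poisoning step itself (class indices are illustrative):

```python
# Label-flipping data poisoning: relabel a source class as a target class
# in a malicious participant's local dataset before training.
import numpy as np

def flip_labels(y, src_class=5, dst_class=3):
    """Return a poisoned copy of the label vector with src relabeled as dst."""
    y_poisoned = y.copy()
    y_poisoned[y == src_class] = dst_class
    return y_poisoned

y = np.array([5, 1, 5, 3, 7])
print(flip_labels(y))   # -> [3 1 3 3 7]
```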

Attack of the tails: Yes, you really can backdoor federated learning

H Wang, K Sreenivasan, S Rajput, et al. - Advances in Neural Information Processing Systems (NeurIPS), 2020 - proceedings.neurips.cc
Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in
the form of backdoors during training. The goal of a backdoor is to corrupt the performance …
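The edge-case flavor of such an attack can be sketched as mixing rare, tail-of-distribution inputs, all stamped with an attacker-chosen target label, into a compromised client's local data (names and the mixing ratio below are illustrative, not the paper's setup):

```python
# Sketch of planting a backdoor during local training: blend "edge-case"
# examples carrying the attacker's target label into benign local data.
import numpy as np

def poison_local_dataset(X, y, X_edge, target_label, mix_ratio=0.1):
    """Blend edge-case examples (with the backdoor label) into clean data."""
    n_poison = max(1, int(mix_ratio * len(X)))
    idx = np.random.choice(len(X_edge), n_poison, replace=True)
    X_mix = np.concatenate([X, X_edge[idx]])
    y_mix = np.concatenate([y, np.full(n_poison, target_label)])
    return X_mix, y_mix

X, y = np.random.randn(20, 4), np.zeros(20, dtype=int)
X_edge = np.random.randn(5, 4)       # stand-in for rare, tail inputs
X_p, y_p = poison_local_dataset(X, y, X_edge, target_label=7)
```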

Local model poisoning attacks to Byzantine-robust federated learning

M Fang, X Cao, J Jia, N Gong - 29th USENIX Security Symposium (USENIX Security 20), 2020 - usenix.org
In federated learning, multiple client devices jointly learn a machine learning model: each
client device maintains a local model for its local training dataset, while a master device …
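A simplified version of such an attack estimates the direction in which benign updates would move each parameter, then submits updates deviated against it. The sketch below fixes the deviation magnitude rather than solving the optimization problem the paper formulates:

```python
# Directed-deviation local model poisoning (simplified): compromised clients
# submit updates pushed opposite to the estimated benign change direction.
import numpy as np

def craft_malicious_updates(benign_updates, n_malicious, lam=1.0):
    mean = np.mean(benign_updates, axis=0)
    direction = np.sign(mean)           # estimated benign change direction
    malicious = mean - lam * direction  # deviate against that direction
    return [malicious.copy() for _ in range(n_malicious)]

benign = [np.array([0.2, -0.1, 0.4]), np.array([0.3, -0.2, 0.5])]
print(craft_malicious_updates(benign, n_malicious=2))
```

Keeping the malicious updates close to the benign mean is what lets them slip past robust aggregation rules such as Krum or trimmed mean while still dragging the global model off course.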

Privacy and robustness in federated learning: Attacks and defenses

L Lyu, H Yu, X Ma, C Chen, L Sun, et al. - IEEE Transactions on Neural Networks and Learning Systems, 2022 - ieeexplore.ieee.org
As data are increasingly stored in separate silos and society becomes more aware
of data privacy issues, the traditional centralized training of artificial intelligence (AI) models …

Machine unlearning

L Bourtoule, V Chandrasekaran, et al. - IEEE Symposium on Security and Privacy (SP), 2021 - ieeexplore.ieee.org
Once users have shared their data online, it is generally difficult for them to revoke access
and ask for the data to be deleted. Machine learning (ML) exacerbates this problem because …
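One way to make deletion tractable, in the spirit of the sharded training this line of work proposes, is to partition the data so that removing a point only forces retraining of the shard that held it (the paper's full scheme additionally slices within shards and checkpoints; the sketch below, with a trivial placeholder learner, shows only the sharding idea):

```python
# Shard-based unlearning sketch: one model per disjoint data shard, so
# deleting a point retrains only the affected shard.
import numpy as np

def train_model(X, y):
    # Placeholder learner: a trivial "model" storing per-class means.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

class ShardedLearner:
    def __init__(self, X, y, n_shards=4):
        self.shards = [(X[i::n_shards].copy(), y[i::n_shards].copy())
                       for i in range(n_shards)]
        self.models = [train_model(Xs, ys) for Xs, ys in self.shards]

    def unlearn(self, shard_id, row_id):
        Xs, ys = self.shards[shard_id]
        Xs = np.delete(Xs, row_id, axis=0)   # remove the revoked point
        ys = np.delete(ys, row_id, axis=0)
        self.shards[shard_id] = (Xs, ys)
        self.models[shard_id] = train_model(Xs, ys)  # retrain this shard only

X, y = np.random.randn(40, 3), np.random.randint(0, 2, 40)
learner = ShardedLearner(X, y)
learner.unlearn(shard_id=1, row_id=0)   # revoke one point, retrain one shard
```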

Neural cleanse: Identifying and mitigating backdoor attacks in neural networks

B Wang, Y Yao, S Shan, H Li, et al. - IEEE Symposium on Security and Privacy (SP), 2019 - ieeexplore.ieee.org
Lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor
attacks, where hidden associations or triggers override normal classification to produce …
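After reverse-engineering a minimal trigger for every output label, this style of defense flags labels whose trigger is anomalously small, using a median-absolute-deviation outlier test. That second step can be sketched in a few lines (the norm values below are made up):

```python
# Outlier detection over recovered trigger sizes: labels whose minimal
# trigger L1 norm is anomalously small are flagged as likely backdoor targets.
import numpy as np

def flag_backdoor_labels(trigger_norms, threshold=2.0):
    norms = np.asarray(trigger_norms, dtype=float)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med))  # consistency constant
    anomaly = (med - norms) / (mad + 1e-12)        # small norms -> large score
    return np.where(anomaly > threshold)[0]

norms = [95.0, 102.0, 88.0, 12.0, 99.0]   # label 3 needs a tiny trigger
print(flag_backdoor_labels(norms))        # -> [3]
```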

Wild patterns: Ten years after the rise of adversarial machine learning

B Biggio, F Roli - Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2018 - dl.acm.org
Deep neural networks and machine-learning algorithms are pervasively used in several
applications, ranging from computer vision to computer security. In most of these …