Byzantine machine learning: A primer

R Guerraoui, N Gupta, R Pinot - ACM Computing Surveys, 2024 - dl.acm.org
The problem of Byzantine resilience in distributed machine learning, a.k.a. Byzantine machine
learning, consists of designing distributed algorithms that can train an accurate model …
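The core idea behind this line of work is to replace plain gradient averaging with a robust aggregation rule that a bounded number of malicious workers cannot hijack. A minimal sketch, using the coordinate-wise median (one of several classical rules surveyed in this literature; the function name and toy data are illustrative, not from any specific paper):

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the median of each coordinate,
    a classic Byzantine-robust alternative to plain averaging."""
    return np.median(np.stack(updates), axis=0)

# Three honest clients report gradients near the true value [1.0, 2.0];
# one Byzantine client reports an arbitrary, extreme vector.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = [np.array([1e6, -1e6])]

robust = coordinate_wise_median(honest + byzantine)   # stays near [1.0, 2.0]
naive = np.mean(np.stack(honest + byzantine), axis=0) # dragged to ~[2.5e5, ...]
```

With plain averaging, a single outlier moves the aggregate arbitrarily far; the median bounds the influence of any minority of malicious clients, which is the resilience property these papers formalize and extend.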

FLTrust: Byzantine-robust federated learning via trust bootstrapping

X Cao, M Fang, J Liu, NZ Gong - arXiv preprint arXiv:2012.13995, 2020 - arxiv.org
Byzantine-robust federated learning aims to enable a service provider to learn an accurate
global model when a bounded number of clients are malicious. The key idea of existing …

Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey

Y Wan, Y Qu, W Ni, Y Xiang, L Gao… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …

Attack of the tails: Yes, you really can backdoor federated learning

H Wang, K Sreenivasan, S Rajput… - Advances in …, 2020 - proceedings.neurips.cc
Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in
the form of backdoors during training. The goal of a backdoor is to corrupt the performance …

Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …

Privacy-enhanced federated learning against poisoning adversaries

X Liu, H Li, G Xu, Z Chen, X Huang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Federated learning (FL), as a distributed machine learning setting, has received
considerable attention in recent years. To alleviate privacy concerns, FL essentially …

MPAF: Model poisoning attacks to federated learning based on fake clients

X Cao, NZ Gong - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
Existing model poisoning attacks to federated learning assume that an attacker has access
to a large fraction of compromised genuine clients. However, such an assumption is not realistic …

Salvaging federated learning by local adaptation

T Yu, E Bagdasaryan, V Shmatikov - arXiv preprint arXiv:2002.04758, 2020 - arxiv.org
Federated learning (FL) is a heavily promoted approach for training ML models on sensitive
data, e.g., text typed by users on their smartphones. FL is expressly designed for training on …

Byzantine machine learning made easy by resilient averaging of momentums

S Farhadkhani, R Guerraoui, N Gupta… - International …, 2022 - proceedings.mlr.press
Byzantine resilience emerged as a prominent topic within the distributed machine learning
community. Essentially, the goal is to enhance distributed optimization algorithms, such as …

Provably secure federated learning against malicious clients

X Cao, J Jia, NZ Gong - Proceedings of the AAAI conference on artificial …, 2021 - ojs.aaai.org
Federated learning enables clients to collaboratively learn a shared global model without
sharing their local training data with a cloud server. However, malicious clients can corrupt …