Privacy-enhanced federated learning against poisoning adversaries

X Liu, H Li, G Xu, Z Chen, X Huang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
… In this paper, the parameters submitted by a poisoning adversary could cause the model to
misclassify, degrading the accuracy of the trained model. Hence, a secure and robust FL needs …

Mitigating sybils in federated learning poisoning

C Fung, CJM Yoon, I Beschastnikh - arXiv preprint arXiv:1808.04866, 2018 - arxiv.org
… vulnerability of federated learning to sybil-based poisoning attacks … to this problem that identifies
poisoning sybils based on the …, targeted poisoning attacks are performed by adversaries …
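This paper's defense (FoolsGold) keys on the observation that sybils pushing a common poisoning objective tend to submit unusually similar updates. Below is a minimal one-round sketch of that idea, not the paper's exact algorithm (which re-weights based on historical updates); `similarity_weights` and the clipping rule are illustrative assumptions:

```python
import numpy as np

def similarity_weights(updates):
    """Down-weight clients whose updates are mutually similar.

    Sketch of similarity-based sybil mitigation: each client's weight
    shrinks with its maximum cosine similarity to any other client, so
    near-identical sybil updates get near-zero aggregation weight.
    """
    U = np.asarray(updates, dtype=float)
    V = U / np.linalg.norm(U, axis=1, keepdims=True)
    cos = V @ V.T
    np.fill_diagonal(cos, -1.0)              # ignore self-similarity
    w = np.clip(1.0 - cos.max(axis=1), 0.0, 1.0)
    return w / w.sum() if w.sum() > 0 else np.full(len(U), 1.0 / len(U))

# Two sybils submit the same poisoned update; one honest client differs.
updates = [[1.0, 2.0], [1.0, 2.0], [-0.5, 0.3]]
print(similarity_weights(updates))           # sybils receive ~0 weight
```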

Analyzing federated learning through an adversarial lens

AN Bhagoji, S Chakraborty, P Mittal… - … on Machine Learning, 2019 - proceedings.mlr.press
… We design attacks on federated learning that ensure targeted poisoning of the global model
while ensuring convergence. Our threat model considers adversaries controlling a small …
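The targeted model poisoning in this paper exploits the fact that the server averages updates: an adversary can scale ("boost") its malicious update so the average still moves where it wants. The sketch below assumes plain FedAvg with no robust aggregation; `boosted_malicious_update` and the scaling factor are illustrative, not the paper's exact attack:

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging of client updates."""
    return np.mean(updates, axis=0)

def boosted_malicious_update(target_delta, n_clients):
    """Scale the adversary's desired model change to survive averaging.

    If the server averages n updates and honest updates are small,
    submitting n * delta moves the average by (approximately) delta.
    Assumes a single adversary and no robust aggregation.
    """
    return n_clients * target_delta

n = 10
honest = [np.random.normal(0, 0.01, 3) for _ in range(n - 1)]
target = np.array([0.5, -0.5, 0.25])         # adversary's desired shift
malicious = boosted_malicious_update(target, n)
print(fedavg(honest + [malicious]))          # ~= target despite averaging
```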

Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning

V Shejwalkar, A Houmansadr… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
…, variations of poisoning, and adversary capabilities. We … threat models of poisoning attacks
on federated learning (FL), … of untargeted model and data poisoning attacks on FL (including …

Data poisoning attacks against federated learning systems

V Tolpegin, S Truex, ME Gursoy, L Liu - … 14–18, 2020, Proceedings, Part I …, 2020 - Springer
Federated learning (FL) is an emerging paradigm for distributed … In this paper, we study
targeted data poisoning attacks … We consider two scenarios in which the adversary is restricted …

Understanding distributed poisoning attack in federated learning

D Cao, S Chang, Z Lin, G Liu… - 2019 IEEE 25th …, 2019 - ieeexplore.ieee.org
Adversaries in collaborative learning and federated learning can be strong. They can …
limits the number of adversaries in the system to 1, because the adversary only gets the global …

Manipulating the Byzantine: Optimizing model poisoning attacks and defenses for federated learning

V Shejwalkar, A Houmansadr - NDSS, 2021 - par.nsf.gov
… Unlike previous works [17], [4], [31], [37], we consider a comprehensive set of possible threat
models for model poisoning attacks along two dimensions of the adversary’s knowledge: …

Threats to federated learning: A survey

L Lyu, H Yu, Q Yang - arXiv preprint arXiv:2003.02133, 2020 - arxiv.org
… on FL systems: 1) poisoning attacks that attempt to prevent a model from being learned at
all, or to bias the model to produce inferences that are preferable to the adversary; and 2) …

ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning

Z Ma, J Ma, Y Miao, Y Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
… trained local gradients into a federated model through a … poisoning attacks launched by a
Byzantine adversary, who crafts malicious local gradients to harm the accuracy of the federated …
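The Byzantine model poisoning described here is commonly countered with robust aggregation. As a generic baseline only (not ShieldFL's scheme, which performs its checks in a privacy-preserving way over protected gradients), a coordinate-wise median bounds the influence of a minority of crafted gradients where the mean does not:

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median aggregation, a standard Byzantine-robust
    baseline: the per-coordinate median ignores extreme values from a
    minority of malicious clients, unlike the mean."""
    return np.median(np.asarray(updates, dtype=float), axis=0)

honest = [[0.1, 0.2], [0.12, 0.18], [0.09, 0.22]]
malicious = [[100.0, -100.0]]                 # crafted to wreck the mean
print(np.mean(honest + malicious, axis=0))    # mean is hijacked
print(coordinate_median(honest + malicious))  # median stays near honest
```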

CONTRA: Defending Against Poisoning Attacks in Federated Learning

S Awan, B Luo, F Li - Computer Security–ESORICS 2021: 26th European …, 2021 - Springer
… We simulate two types of poisoning attacks: (1) Label-flipping attacks: the adversaries
attempt to flip a randomly selected source label (S) of the training samples to a target (adversarial) …
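A minimal sketch of the label-flipping attack simulated above; the paper selects the source class at random, while the class indices and fixed source/target pair here are illustrative:

```python
import numpy as np

def label_flip(labels, source, target):
    """Label-flipping data poisoning: relabel every training sample of
    class `source` as class `target` before local training, biasing the
    shared model to confuse the two classes."""
    poisoned = np.array(labels, copy=True)
    poisoned[poisoned == source] = target
    return poisoned

labels = np.array([0, 1, 2, 1, 0, 2, 1])
print(label_flip(labels, source=1, target=7))  # -> [0 7 2 7 0 2 7]
```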