ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning

Z Ma, J Ma, Y Miao, Y Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Privacy-Preserving Federated Learning (PPFL) is an emerging secure distributed learning
paradigm that aggregates user-trained local gradients into a federated model through a …
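The aggregation step mentioned in this snippet is typically a weighted average of client-submitted gradients (FedAvg-style). A minimal sketch is given below, assuming plain NumPy arrays as local gradients and omitting the encryption layer that PPFL schemes add on top; function and variable names are illustrative only.

```python
import numpy as np

def aggregate_gradients(local_grads, weights=None):
    """Weighted average of client gradient vectors (FedAvg-style aggregation).

    local_grads: list of np.ndarray, one flattened gradient per client.
    weights: optional per-client weights (e.g., local dataset sizes);
             defaults to a uniform average.
    """
    if weights is None:
        weights = np.ones(len(local_grads))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()       # normalize to a convex combination
    stacked = np.stack(local_grads)         # shape: (num_clients, num_params)
    return np.average(stacked, axis=0, weights=weights)

# Example: three clients, uniform weighting
grads = [np.array([0.1, -0.2]), np.array([0.0, 0.3]), np.array([0.2, 0.1])]
global_update = aggregate_gradients(grads)
```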

Privacy-enhanced federated learning against poisoning adversaries

X Liu, H Li, G Xu, Z Chen, X Huang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Federated learning (FL), as a distributed machine learning setting, has received
considerable attention in recent years. To alleviate privacy concerns, FL essentially …

A robust privacy-preserving federated learning model against model poisoning attacks

A Yazdinejad, A Dehghantanha… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Although federated learning offers a level of privacy by aggregating user data without direct
access, it remains inherently vulnerable to various attacks, including poisoning attacks …

Flip: A provable defense framework for backdoor mitigation in federated learning

K Zhang, G Tao, Q Xu, S Cheng, S An, Y Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
Federated Learning (FL) is a distributed learning paradigm that enables different parties to
train a model collaboratively with high quality and strong privacy protection. In this scenario …

Privacy-preserving Byzantine-robust federated learning via blockchain systems

Y Miao, Z Liu, H Li, KKR Choo… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Federated learning enables clients to train a machine learning model jointly without sharing
their local data. However, due to the centrality of the federated learning framework and the …

Romoa: Robust model aggregation for the resistance of federated learning to model poisoning attacks

Y Mao, X Yuan, X Zhao, S Zhong - … , October 4–8, 2021, Proceedings, Part …, 2021 - Springer
Training a deep neural network requires substantial data and intensive computing
resources. This unaffordable cost holds back many potential applications of deep learning …

Fedrecover: Recovering from poisoning attacks in federated learning using historical information

X Cao, J Jia, Z Zhang, NZ Gong - 2023 IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Federated learning is vulnerable to poisoning attacks in which malicious clients poison the
global model by sending malicious model updates to the server. Existing defenses focus on …

TEAR: Exploring temporal evolution of adversarial robustness for membership inference attacks against federated learning

G Liu, Z Tian, J Chen, C Wang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Federated learning (FL) is a privacy-preserving machine learning paradigm that enables
multiple clients to train a unified model without disclosing their private data. However …

PVD-FL: A privacy-preserving and verifiable decentralized federated learning framework

J Zhao, H Zhu, F Wang, R Lu, Z Liu… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In recent years, the increasingly severe data island problem has spawned an emerging
distributed deep learning framework, federated learning, in which the global model can be …

Practical private aggregation in federated learning against inference attack

P Zhao, Z Cao, J Jiang, F Gao - IEEE Internet of Things Journal, 2022 - ieeexplore.ieee.org
Federated learning (FL) enables multiple worker devices to share local models trained on their
private data to collaboratively train a machine learning model. However, local models are …