A robust privacy-preserving federated learning model against model poisoning attacks

A Yazdinejad, A Dehghantanha… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Although federated learning offers a level of privacy by aggregating user data without direct
access, it remains inherently vulnerable to various attacks, including poisoning attacks …

APFed: Anti-poisoning attacks in privacy-preserving heterogeneous federated learning

X Chen, H Yu, X Jia, X Yu - IEEE Transactions on Information …, 2023 - ieeexplore.ieee.org
Federated learning (FL) is an emerging paradigm of privacy-preserving distributed machine
learning that effectively deals with the privacy leakage problem by utilizing cryptographic …

Privacy-enhanced federated learning against poisoning adversaries

X Liu, H Li, G Xu, Z Chen, X Huang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Federated learning (FL), as a distributed machine learning setting, has received
considerable attention in recent years. To alleviate privacy concerns, FL essentially …

ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning

Z Ma, J Ma, Y Miao, Y Li… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Privacy-Preserving Federated Learning (PPFL) is an emerging secure distributed learning
paradigm that aggregates user-trained local gradients into a federated model through a …
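
For context, the aggregation step that such PPFL schemes protect is, in the clear, just a FedAvg-style weighted average of client updates. Below is a minimal plaintext sketch in numpy; the name fedavg_aggregate and the toy numbers are illustrative assumptions, and ShieldFL itself operates on encrypted gradients rather than plaintext arrays.

    import numpy as np

    def fedavg_aggregate(client_updates, client_sizes):
        """Weighted average of client model updates (plain FedAvg).

        client_updates: list of 1-D numpy arrays, one flattened update per client
        client_sizes:   number of local training samples per client (the weights)
        """
        weights = np.asarray(client_sizes, dtype=float)
        weights /= weights.sum()                    # normalize weights to sum to 1
        stacked = np.stack(client_updates)          # shape: (n_clients, n_params)
        return (weights[:, None] * stacked).sum(axis=0)

    # toy usage: three clients, four parameters each
    updates = [np.array([0.1, 0.2, 0.0, -0.1]),
               np.array([0.0, 0.3, 0.1, -0.2]),
               np.array([0.2, 0.1, 0.0,  0.0])]
    global_update = fedavg_aggregate(updates, client_sizes=[100, 200, 100])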

Two-level privacy-preserving framework: Federated learning for attack detection in the consumer internet of things

E Rabieinejad, A Yazdinejad… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
As the adoption of Consumer Internet of Things (CIoT) devices surges, so do concerns about
security vulnerabilities and privacy breaches. Given their integration into daily life and data …

Information leakage by model weights on federated learning

X Xu, J Wu, M Yang, T Luo, X Duan, W Li… - Proceedings of the …, 2020 - dl.acm.org
Federated learning aggregates data from multiple sources while protecting privacy, which
makes it possible to train efficient models in real-world settings. However, although federated …
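
One well-known concrete instance of such leakage: with softmax cross-entropy, the gradient of the output-layer bias equals softmax(logits) minus the one-hot label, so it is negative only at the true label, and anyone who observes a single client's raw update can read the label straight off the gradient sign. The short numpy sketch below illustrates that standard observation; it is not the paper's own experiment.

    import numpy as np

    def last_layer_bias_grad(logits, label, num_classes):
        """Gradient of softmax cross-entropy w.r.t. the output-layer bias."""
        z = logits - logits.max()                 # shift for numerical stability
        probs = np.exp(z) / np.exp(z).sum()
        onehot = np.eye(num_classes)[label]
        return probs - onehot                     # negative only at the true label

    grad = last_layer_bias_grad(np.array([1.0, 2.0, 0.5]), label=1, num_classes=3)
    leaked_label = int(np.argmin(grad))           # index of the single negative entry
    print(grad, "-> inferred label:", leaked_label)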

Comments on “privacy-enhanced federated learning against poisoning adversaries”

T Schneider, A Suresh, H Yalame - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Liu et al. (2021) recently proposed a privacy-enhanced framework named PEFL to efficiently
detect poisoning behaviours in Federated Learning (FL) using homomorphic encryption. In …

Depriving the Survival Space of Adversaries Against Poisoned Gradients in Federated Learning

J Lu, S Hu, W Wan, M Li, LY Zhang… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Federated learning (FL) allows clients at the edge to learn a shared global model without
disclosing their private data. However, FL is susceptible to poisoning attacks, wherein an …
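
A common family of defenses against such poisoned gradients filters client updates by their similarity to a trusted reference direction, for example the previous round's aggregate. The sketch below is a generic cosine-similarity filter given only for illustration, not the specific mechanism proposed in this paper; filter_by_cosine and the threshold value are assumed names.

    import numpy as np

    def filter_by_cosine(updates, reference, threshold=0.0):
        """Keep only client updates whose cosine similarity to a reference
        direction (e.g., last round's aggregate) exceeds a threshold."""
        kept = []
        ref = reference / (np.linalg.norm(reference) + 1e-12)
        for u in updates:
            cos = float(u @ ref) / (np.linalg.norm(u) + 1e-12)
            if cos > threshold:
                kept.append(u)
        return kept

    # toy usage: the flipped (poisoned-looking) update is filtered out
    ref = np.array([1.0, 1.0, 0.0])
    updates = [np.array([0.9, 1.1, 0.1]), np.array([-1.0, -1.0, 0.0])]
    benign = filter_by_cosine(updates, ref)   # keeps only the first update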

MUD-PQFed: Towards malicious user detection on model corruption in privacy-preserving quantized federated learning

H Ma, Q Li, Y Zheng, Z Zhang, X Liu, Y Gao… - Computers & …, 2023 - Elsevier
The use of cryptographic privacy-preserving techniques in Federated Learning (FL)
inadvertently induces a security dilemma because tampered local model parameters are …
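
The dilemma is that additively homomorphic encryption lets the server aggregate updates it can no longer inspect. A minimal sketch with the python-paillier (phe) library, assuming scalar toy updates; real schemes encrypt full parameter vectors.

    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

    benign   = public_key.encrypt(0.12)     # honest client's (toy, scalar) update
    tampered = public_key.encrypt(12.0)     # poisoned update, 100x larger

    # the server can still aggregate without decrypting individual updates ...
    encrypted_sum = benign + tampered
    # ... but it cannot inspect or compare ciphertexts to spot the outlier;
    # only the key holder sees the aggregate in the clear
    print(private_key.decrypt(encrypted_sum))   # approximately 12.12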

BPFL: Blockchain-based privacy-preserving federated learning against poisoning attack

Y Ren, M Hu, Z Yang, G Feng, X Zhang - Information Sciences, 2024 - Elsevier
In federated learning (FL), multiple clients use local datasets to train models and submit
local gradients to the server for aggregation. However, malicious clients may compromise …
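
To make "malicious clients may compromise" the aggregate concrete, a model-replacement-style poisoning client can scale a crafted direction so that it dominates naive averaging. The sketch below is purely illustrative; poisoned_update and the boost factor are assumptions, not BPFL's threat model, and real attacks are stealthier.

    import numpy as np

    def poisoned_update(honest_update, target_direction, boost=10.0):
        """Toy model-poisoning client: replace the honest update with a
        crafted direction, scaled so it dominates a naive average."""
        return boost * target_direction - honest_update

    honest = np.array([0.1, -0.2, 0.05])
    target = np.array([0.0, 1.0, 0.0])     # direction the attacker wants the model pushed
    malicious = poisoned_update(honest, target)

    # averaging [honest, malicious] yields 5 * target: the attacker's direction wins
    avg = (honest + malicious) / 2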