Byzantine machine learning: A primer

R Guerraoui, N Gupta, R Pinot - ACM Computing Surveys, 2024 - dl.acm.org
The problem of Byzantine resilience in distributed machine learning, aka Byzantine machine
learning, consists of designing distributed algorithms that can train an accurate model …
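As a concrete illustration of the robust aggregation rules such surveys discuss, here is a minimal sketch of coordinate-wise median aggregation; the function name and the toy gradients are invented for this example, and the rule is only one of many Byzantine-resilient aggregators covered in the literature.

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate worker gradients by taking the median of each coordinate.

    Unlike plain averaging, the per-coordinate median is not dragged
    arbitrarily far by a minority of Byzantine (malicious) workers.
    """
    stacked = np.stack(gradients)       # shape: (n_workers, n_params)
    return np.median(stacked, axis=0)   # shape: (n_params,)

# Toy round: four honest workers and one Byzantine worker sending a huge gradient.
honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]),
          np.array([1.1, 0.9]), np.array([1.0, 1.05])]
byzantine = [np.array([1e6, -1e6])]

print(coordinate_wise_median(honest + byzantine))      # stays close to [1.0, 1.0]
print(np.mean(np.stack(honest + byzantine), axis=0))   # ruined by the outlier
```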

Trusted AI in multiagent systems: An overview of privacy and security for distributed learning

C Ma, J Li, K Wei, B Liu, M Ding, L Yuan… - Proceedings of the …, 2023 - ieeexplore.ieee.org
Motivated by the advancing computational capacity of distributed end-user equipment (UE),
as well as the increasing concerns about sharing private data, there has been considerable …

Poisoning attacks in federated learning: A survey

G Xia, J Chen, C Yu, J Ma - IEEE Access, 2023 - ieeexplore.ieee.org
Federated learning faces many security and privacy issues. Among them, poisoning attacks
can significantly impact global models, and malicious attackers can prevent global models …
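For concreteness, one of the simplest poisoning attacks covered by surveys of this kind is label flipping; a minimal sketch, with class indices and data chosen purely for illustration:

```python
import numpy as np

def flip_labels(labels, source_class, target_class):
    """Return a poisoned copy of `labels` where every example of
    `source_class` is relabeled as `target_class`.

    A malicious client training on such data nudges the global model
    toward misclassifying the source class as the target class.
    """
    poisoned = labels.copy()
    poisoned[labels == source_class] = target_class
    return poisoned

# Illustrative use: a malicious client flips class 1 to class 0 on its local data.
y_local = np.array([0, 1, 1, 0, 1, 0])
y_poisoned = flip_labels(y_local, source_class=1, target_class=0)
print(y_poisoned)  # [0 0 0 0 0 0]
```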

Backdoor defense via deconfounded representation learning

Z Zhang, Q Liu, Z Wang, Z Lu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks (DNNs) have recently been shown to be vulnerable to backdoor attacks,
where attackers embed hidden backdoors in the DNN model by injecting a few poisoned …
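A BadNets-style trigger injection, the kind of attack such defenses target, can be sketched as follows; the patch size, position, and target label are arbitrary illustrative choices, not taken from this paper:

```python
import numpy as np

def add_backdoor_trigger(images, labels, target_label, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner of each image
    and relabel the sample as `target_label`.

    Training on a few such samples makes the model associate the trigger
    with the target class while behaving normally on clean inputs.
    """
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value   # visible square patch
    return poisoned, np.full(len(labels), target_label)

# Illustrative use on a batch of eight 28x28 grayscale images.
x = np.random.rand(8, 28, 28)
y = np.random.randint(0, 10, size=8)
x_bd, y_bd = add_backdoor_trigger(x, y, target_label=7)
```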

Federated Learning with Long-Tailed Data via Representation Unification and Classifier Rectification

W Huang, Y Liu, M Ye, J Chen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Prevalent federated learning methods commonly assume that the global class distribution is
balanced. In contrast, real-world data typically follows the long-tailed …

FLTracer: Accurate poisoning attack provenance in federated learning

X Zhang, Q Liu, Z Ba, Y Hong, T Zheng… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Federated Learning (FL) is a promising distributed learning approach that enables multiple
clients to collaboratively train a shared global model. However, recent studies show that FL …
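The collaborative training the snippet refers to typically aggregates client models with FedAvg-style weighted averaging; a minimal sketch, assuming each client reports a flat parameter vector together with its local sample count:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    client_params: list of 1-D arrays, one parameter vector per client.
    client_sizes:  number of local training samples per client, used as weights.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)                 # (n_clients, n_params)
    return np.average(stacked, axis=0, weights=weights)

# Illustrative round with three clients of unequal dataset sizes.
params = [np.array([0.2, -0.1]), np.array([0.25, -0.05]), np.array([0.15, -0.2])]
sizes = [100, 300, 50]
global_update = fedavg(params, sizes)
```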

Poisoning federated recommender systems with fake users

M Yin, Y Xu, M Fang, NZ Gong - Proceedings of the ACM on Web …, 2024 - dl.acm.org
Federated recommendation is a prominent use case within federated learning, yet it remains
susceptible to various attacks, ranging from user-side to server-side vulnerabilities. Poisoning attacks are …

Defending against data poisoning attack in federated learning with non-IID data

C Yin, Q Zeng - IEEE Transactions on Computational Social …, 2023 - ieeexplore.ieee.org
Federated learning (FL) is an emerging paradigm that allows participants to collaboratively
train deep learning tasks while protecting the privacy of their local data. However, the …
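As a generic illustration of update-filtering defenses (not the scheme proposed in this paper), a server can discard client updates whose norm deviates strongly from the median before averaging:

```python
import numpy as np

def filter_by_norm(updates, tolerance=3.0):
    """Drop client updates whose L2 norm exceeds `tolerance` times the
    median norm, then average the survivors.

    A crude screen against poisoned updates that are scaled up to dominate
    the aggregate; it does not catch stealthier, small-norm attacks.
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    median_norm = np.median(norms)
    kept = [u for u, n in zip(updates, norms) if n <= tolerance * median_norm]
    return np.mean(np.stack(kept), axis=0)

# Illustrative use: one client submits an update scaled up by roughly 100x.
updates = [np.array([0.1, 0.2]), np.array([0.12, 0.18]), np.array([10.0, 20.0])]
print(filter_by_norm(updates))
```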

Towards Secure and Verifiable Hybrid Federated Learning

R Du, X Li, D He, KKR Choo - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Reducing computation cost and ensuring update integrity are key challenges in federated
learning (FL). In this paper, we present a secure and verifiable hybrid FL system for training …
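As a much weaker stand-in for the verifiable aggregation studied in such work, the basic idea of checking update integrity can be sketched with an HMAC over the serialized update; the pre-shared key and helper names here are hypothetical:

```python
import hashlib
import hmac
import numpy as np

def tag_update(update, key):
    """Attach an HMAC-SHA256 tag to a serialized model update so the server
    can detect tampering in transit (far weaker than verifiable aggregation)."""
    payload = update.astype(np.float64).tobytes()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_update(payload, tag, key):
    """Recompute the HMAC tag and compare in constant time before accepting."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"shared-client-server-key"           # hypothetical pre-shared key
payload, tag = tag_update(np.array([0.1, -0.2, 0.3]), key)
print(verify_update(payload, tag, key))     # True
```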

Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses

Y Yang, Q Li, J Jia, Y Hong, B Wang - arXiv preprint arXiv:2407.08935, 2024 - arxiv.org
Federated graph learning (FedGL) is an emerging federated learning (FL) framework that
extends FL to learn graph data from diverse sources. FL for non-graph data has been shown to be …