Asynchronous federated learning on heterogeneous devices: A survey

C Xu, Y Qu, Y Xiang, L Gao - Computer Science Review, 2023 - Elsevier
Federated learning (FL) is a distributed machine learning framework, where the
global model is generated on the centralized aggregation server based on the parameters of …

Local differential privacy and its applications: A comprehensive survey

M Yang, T Guo, T Zhu, I Tjuawinata, J Zhao… - Computer Standards & …, 2023 - Elsevier
With the rapid development of low-cost consumer electronics and pervasive adoption of next
generation wireless communication technologies, a tremendous amount of data has been …

FLTrust: Byzantine-robust federated learning via trust bootstrapping

X Cao, M Fang, J Liu, NZ Gong - arXiv preprint arXiv:2012.13995, 2020 - arxiv.org
Byzantine-robust federated learning aims to enable a service provider to learn an accurate
global model when a bounded number of clients are malicious. The key idea of existing …

Survey on federated learning threats: Concepts, taxonomy on attacks and defences, experimental study and challenges

N Rodríguez-Barroso, D Jiménez-López, MV Luzón… - Information …, 2023 - Elsevier
Federated learning is a machine learning paradigm that emerges as a solution to the privacy-
preservation demands in artificial intelligence. As machine learning, federated learning is …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

Hidden backdoors in human-centric language models

S Li, H Liu, T Dong, BZH Zhao, M Xue, H Zhu… - Proceedings of the 2021 …, 2021 - dl.acm.org
Natural language processing (NLP) systems have been proven to be vulnerable to backdoor
attacks, whereby hidden features (backdoors) are trained into a language model and may …

Certified robustness of nearest neighbors against data poisoning and backdoor attacks

J Jia, Y Liu, X Cao, NZ Gong - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
Data poisoning attacks and backdoor attacks aim to corrupt a machine learning classifier via
modifying, adding, and/or removing some carefully selected training examples, such that the …

A survey of what to share in federated learning: Perspectives on model utility, privacy leakage, and communication efficiency

J Shao, Z Li, W Sun, T Zhou, Y Sun, L Liu, Z Lin… - arXiv preprint arXiv …, 2023 - arxiv.org
Federated learning (FL) has emerged as a highly effective paradigm for privacy-preserving
collaborative training among different parties. Unlike traditional centralized learning, which …

PoisonedEncoder: Poisoning the unlabeled pre-training data in contrastive learning

H Liu, J Jia, NZ Gong - 31st USENIX Security Symposium (USENIX …, 2022 - usenix.org
Contrastive learning pre-trains an image encoder using a large amount of unlabeled data
such that the image encoder can be used as a general-purpose feature extractor for various …

Manipulation attacks in local differential privacy

A Cheu, A Smith, J Ullman - 2021 IEEE Symposium on Security …, 2021 - ieeexplore.ieee.org
Local differential privacy is a widely studied restriction on distributed algorithms that collect
aggregates about sensitive user data, and is now deployed in several large systems. We …