Heterogeneous federated learning: State-of-the-art and research challenges

M Ye, X Fang, B Du, PC Yuen, D Tao - ACM Computing Surveys, 2023 - dl.acm.org
Federated learning (FL) has drawn increasing attention owing to its potential use in large-
scale industrial applications. Existing FL works mainly focus on model homogeneous …

A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

ChatGPT and environmental research

JJ Zhu, J Jiang, M Yang, ZJ Ren - Environmental Science & …, 2023 - ACS Publications
ChatGPT, the latest text-based artificial intelligence (AI) tool, has quickly gained popularity
and is poised to revolutionize various aspects of our lives, including education and research …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

The curse of recursion: Training on generated data makes models forget

I Shumailov, Z Shumaylov, Y Zhao, Y Gal… - arXiv preprint arXiv …, 2023 - arxiv.org
Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and
GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT …

On the exploitability of instruction tuning

M Shu, J Wang, C Zhu, J Geiping… - Advances in Neural …, 2023 - proceedings.neurips.cc
Instruction tuning is an effective technique to align large language models (LLMs) with
human intent. In this work, we investigate how an adversary can exploit instruction tuning by …

Poisoning web-scale training datasets is practical

N Carlini, M Jagielski, CA Choquette-Choo… - arXiv preprint arXiv …, 2023 - arxiv.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

Ditto: Fair and robust federated learning through personalization

T Li, S Hu, A Beirami, V Smith - International conference on …, 2021 - proceedings.mlr.press
Fairness and robustness are two important concerns for federated learning systems. In this
work, we identify that robustness to data and model poisoning attacks and fairness …
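The snippet cuts off before the mechanism, but the title points to personalization as the route to both goals: each client keeps a personal model that is regularized toward the jointly trained global model. A minimal sketch of that idea in Python (the proximal form, the parameter names, and the lam and step-size values are our assumptions, not details quoted from the paper):

import numpy as np

def ditto_local_step(v_k, w_global, grad_fk, lam=0.1, lr=0.1):
    # One local personalization step: descend on the client's own loss F_k
    # plus a proximal term (lam/2)*||v_k - w_global||^2 that keeps the
    # personal model v_k anchored to the shared global model w_global.
    return v_k - lr * (grad_fk(v_k) + lam * (v_k - w_global))

# Toy usage with a quadratic local loss F_k(v) = 0.5*||v - target||^2.
target = np.array([1.0, -2.0, 3.0])
grad_fk = lambda v: v - target          # gradient of the toy local loss
w_global = np.zeros(3)                  # stand-in for the aggregated global model
v_k = np.zeros(3)
for _ in range(200):
    v_k = ditto_local_step(v_k, w_global, grad_fk)
print(v_k)  # converges to target/(1+lam): between the global model and the local optimum

Larger lam pulls the personal model toward the shared global model, smaller lam lets it track the client's own data more closely; that trade-off is what the personalization framing is about.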

Fltrust: Byzantine-robust federated learning via trust bootstrapping

X Cao, M Fang, J Liu, NZ Gong - arXiv preprint arXiv:2012.13995, 2020 - arxiv.org
Byzantine-robust federated learning aims to enable a service provider to learn an accurate
global model when a bounded number of clients are malicious. The key idea of existing …
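The snippet breaks off at "The key idea of existing …", so as a rough illustration of what trust bootstrapping can look like at the aggregation step, here is a sketch in which the server computes its own update on a small clean root dataset, scores each client update by its ReLU-clipped cosine similarity to that server update, rescales client updates to the server update's magnitude, and averages with the trust scores as weights. The function name, the toy attacker, and the numerical details are our illustration, not code from the paper.

import numpy as np

def trust_weighted_aggregate(client_updates, server_update):
    # Score each client update against the server's root-dataset update and
    # return the trust-weighted average of magnitude-normalized updates.
    s_norm = np.linalg.norm(server_update)
    scores, rescaled = [], []
    for u in client_updates:
        u_norm = np.linalg.norm(u)
        cos = float(np.dot(u, server_update)) / (u_norm * s_norm + 1e-12)
        scores.append(max(cos, 0.0))                      # ReLU: ignore negatively aligned updates
        rescaled.append(u * (s_norm / (u_norm + 1e-12)))  # match the server update's magnitude
    total = sum(scores)
    if total == 0.0:
        return np.zeros_like(server_update)               # no client deemed trustworthy this round
    return sum(s * u for s, u in zip(scores, rescaled)) / total

# Toy round: three roughly honest clients and one sign-flipping attacker.
rng = np.random.default_rng(0)
server_update = rng.normal(size=10)
clients = [server_update + 0.1 * rng.normal(size=10) for _ in range(3)]
clients.append(-5.0 * server_update)
agg = trust_weighted_aggregate(clients, server_update)
print(np.dot(agg, server_update) > 0)  # True: the attacker gets zero trust weight

The sign-flipping client receives a negative cosine score, so the ReLU drops it entirely; the magnitude rescaling keeps any remaining malicious client from dominating by sheer norm.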

A survey on security and privacy of federated learning

V Mothukuri, RM Parizi, S Pouriyeh, Y Huang… - Future Generation …, 2021 - Elsevier
Federated learning (FL) is a new breed of Artificial Intelligence (AI) that builds upon
decentralized data and training, bringing learning to the edge or directly on-device. FL is a …