Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - Foundations and Trends® in Machine Learning, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
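
The orchestration pattern described here can be sketched in a few lines: a server broadcasts a model, clients train locally, and the server averages the results. A minimal FedAvg-style toy in Python, with synthetic quadratic client objectives (the helper local_sgd and all data here are illustrative assumptions, not anything from the survey):

    import numpy as np

    rng = np.random.default_rng(0)
    d, num_clients = 5, 10
    # Hypothetical local data: each client privately holds a target vector.
    targets = [rng.normal(size=d) for _ in range(num_clients)]

    def local_sgd(x, target, steps=5, lr=0.1):
        # Client locally minimizes f_i(x) = 0.5 * ||x - target||^2.
        for _ in range(steps):
            x = x - lr * (x - target)
        return x

    x = np.zeros(d)  # global model held by the orchestrating server
    for _ in range(20):
        # Server broadcasts x; clients train locally; server averages.
        x = np.mean([local_sgd(x.copy(), t) for t in targets], axis=0)

    # The average objective is minimized at the mean of the targets.
    print("distance to optimum:", np.linalg.norm(x - np.mean(targets, axis=0)))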

From local SGD to local fixed-point methods for federated learning

G Malinovskiy, D Kovalev, E Gasanov… - International Conference on Machine Learning, 2020 - proceedings.mlr.press
Most algorithms for solving optimization problems or finding saddle points of convex-
concave functions are fixed-point algorithms. In this work we consider the generic problem of …
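
To make the fixed-point view concrete: gradient descent is the iteration x <- T(x) with T(x) = x - gamma * grad f(x), and a "local" method lets each client apply its own operator several times before averaging. A toy sketch under that reading (the quadratic operators and the fixed count of three local applications are assumptions for illustration, not the paper's exact scheme):

    import numpy as np

    rng = np.random.default_rng(1)
    d, n = 4, 8
    anchors = [rng.normal(size=d) for _ in range(n)]

    def T(x, a, gamma=0.3):
        # Client operator: one gradient step on f_i(x) = 0.5 * ||x - a||^2,
        # i.e. T(x) = x - gamma * (x - a); its fixed point is a.
        return x - gamma * (x - a)

    x = np.zeros(d)
    for _ in range(50):
        # Each client applies its operator three times locally,
        # then the server averages the resulting points.
        x = np.mean([T(T(T(x, a), a), a) for a in anchors], axis=0)

    # The iteration converges to the fixed point of the averaged operator.
    print("distance to fixed point:", np.linalg.norm(x - np.mean(anchors, axis=0)))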

DoCoFL: Downlink compression for cross-device federated learning

R Dorfman, S Vargaftik… - International Conference on Machine Learning, 2023 - proceedings.mlr.press
Many compression techniques have been proposed to reduce the communication overhead
of Federated Learning training procedures. However, these are typically designed for …
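
A toy illustration of the downlink direction: rather than broadcasting the full parameter vector, the server can repeatedly send a compressed correction that moves each client's copy toward the current server model. The top-k sparsifier below is a generic stand-in chosen for brevity, not DoCoFL's actual construction:

    import numpy as np

    def top_k(v, k):
        # Biased sparsifier: keep only the k largest-magnitude coordinates.
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
        return out

    rng = np.random.default_rng(2)
    server_model = rng.normal(size=100)
    client_copy = np.zeros(100)

    for _ in range(10):
        # Downlink message: a sparse correction toward the server model.
        client_copy = client_copy + top_k(server_model - client_copy, k=10)

    print("residual error:", np.linalg.norm(server_model - client_copy))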

Improving accelerated federated learning with compression and importance sampling

M Grudzień, G Malinovsky, P Richtárik - arXiv preprint arXiv:2306.03240, 2023 - arxiv.org
Federated Learning is a collaborative training framework that leverages heterogeneous data
distributed across a vast number of clients. Since it is practically infeasible to request and …
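
Partial participation is often paired with importance sampling: draw one client per step with probability p_i and reweight its gradient by 1/(n * p_i) so the estimate stays unbiased. In the sketch below, the score used to set p_i is an arbitrary illustrative choice (the paper derives its own sampling distribution):

    import numpy as np

    rng = np.random.default_rng(3)
    n, d = 20, 5
    A = [rng.normal(size=d) for _ in range(n)]  # synthetic per-client data

    # Sampling probabilities: proportional to a hypothetical per-client score.
    scores = np.array([1.0 + np.linalg.norm(a) for a in A])
    p = scores / scores.sum()

    x = np.zeros(d)
    for t in range(5000):
        i = rng.choice(n, p=p)
        # Unbiased estimator of grad f(x) = (1/n) * sum_i (x - A[i]).
        g = (x - A[i]) / (n * p[i])
        x = x - (1.0 / (t + 20)) * g  # decaying step size

    print("distance to minimizer:", np.linalg.norm(x - np.mean(A, axis=0)))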

Partial variable training for efficient on-device federated learning

TJ Yang, D Guliani, F Beaufays… - ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing, 2022 - ieeexplore.ieee.org
This paper aims to address the major challenges of Federated Learning (FL) on edge
devices: limited memory and expensive communication. We propose a novel method, called …
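
The core trick of training (and communicating) only a small subset of variables per round can be sketched as follows; the random per-round coordinate selection and the quadratic objective are assumptions for illustration, not the paper's actual selection policy:

    import numpy as np

    rng = np.random.default_rng(4)
    d = 50
    x_global = np.zeros(d)
    target = rng.normal(size=d)  # stands in for the client's local data

    def client_round(x, idx, steps=5, lr=0.2):
        # Only the selected coordinates are updated (and would be sent back);
        # the rest stay frozen, saving both memory and bandwidth.
        x = x.copy()
        for _ in range(steps):
            g = x - target  # gradient of 0.5 * ||x - target||^2
            x[idx] -= lr * g[idx]
        return x[idx]

    for _ in range(25):
        idx = rng.choice(d, size=10, replace=False)  # 20% of the variables
        x_global[idx] = client_round(x_global, idx)

    print("loss:", 0.5 * np.sum((x_global - target) ** 2))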

A unified analysis of stochastic gradient methods for nonconvex federated optimization

Z Li, P Richtárik - arXiv preprint arXiv:2006.07013, 2020 - arxiv.org
In this paper, we study the performance of a large family of SGD variants in the smooth
nonconvex regime. To this end, we propose a generic and flexible assumption capable of …
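
For flavor, unifying assumptions in this line of work typically bound the second moment of the gradient estimator g(x) in an ABC-style form (the paper's exact condition may differ in its details):

    \mathbb{E}\big[\|g(x)\|^2\big] \le 2A\big(f(x) - f^{\inf}\big) + B\,\|\nabla f(x)\|^2 + C,

where A, B, C >= 0 are constants and f^inf is a lower bound on f. Particular choices of (A, B, C) recover plain SGD, variance-reduced, and compressed-gradient methods as special cases of one analysis.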

Towards a better theoretical understanding of independent subnetwork training

E Shulgin, P Richtárik - arXiv preprint arXiv:2306.16484, 2023 - arxiv.org
Modern advancements in large-scale machine learning would be impossible without the
paradigm of data-parallel distributed computing. Since distributed computing with large …
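
In independent subnetwork training, each worker updates only its own block of parameters, so both compute and communication scale with the block size. A toy separable version (the even coordinate partition and the quadratic loss are assumptions; real IST partitions, e.g., the hidden units of a network):

    import numpy as np

    rng = np.random.default_rng(5)
    d, n_workers = 12, 3
    target = rng.normal(size=d)
    x = np.zeros(d)

    # Disjoint parameter blocks, one subnetwork per worker.
    blocks = np.array_split(np.arange(d), n_workers)

    for _ in range(30):
        for blk in blocks:
            # Each worker trains (and would communicate) only its own block;
            # the blocks are disjoint, so this mimics parallel execution.
            g = x - target
            x[blk] -= 0.3 * g[blk]

    print("loss:", 0.5 * np.sum((x - target) ** 2))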

Local methods with adaptivity via scaling

S Chezhegov, S Skorik, N Khachaturov… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid development of machine learning and deep learning has introduced increasingly
complex optimization challenges that must be addressed. Indeed, training modern …
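
One concrete reading of "adaptivity via scaling": run local steps preconditioned by a per-coordinate (diagonal) scaling, in the spirit of Adam/RMSProp. The sketch below uses the exact diagonal of a quadratic as a stand-in preconditioner, an illustrative simplification rather than the paper's algorithm:

    import numpy as np

    rng = np.random.default_rng(6)
    d, n = 4, 6
    # Badly scaled quadratic per client: f_i(x) = 0.5 * x^T A_i x.
    A = [np.diag(rng.uniform(0.5, 50.0, size=d)) for _ in range(n)]

    def local_scaled_steps(x, Ai, steps=3, lr=0.5):
        D = np.diag(Ai)  # per-coordinate curvature, the stand-in scaling
        for _ in range(steps):
            g = Ai @ x
            x = x - lr * g / D  # scaled (preconditioned) local step
        return x

    x = np.ones(d)
    for _ in range(20):
        # Local scaled steps on each client, then server-side averaging.
        x = np.mean([local_scaled_steps(x.copy(), Ai) for Ai in A], axis=0)

    # Scaling makes per-step progress independent of the conditioning.
    print("distance to minimizer:", np.linalg.norm(x))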

Communication cost reduction with partial structure in federated learning

D Kang, CW Ahn - Electronics, 2021 - mdpi.com
Federated learning is a distributed learning algorithm designed to train a single global
model on a server using different clients and their local data. To improve the performance of …

Shifted compression framework: Generalizations and improvements

E Shulgin, P Richtárik - Uncertainty in Artificial Intelligence, 2022 - proceedings.mlr.press
Communication is one of the key bottlenecks in the distributed training of large-scale
machine learning models, and lossy compression of exchanged information, such as …
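
The shifted idea itself is simple: instead of compressing a vector x directly, sender and receiver keep a shared shift h, the sender transmits C(x - h), the receiver reconstructs h + C(x - h), and both nudge h toward x, so the compressed difference (and with it the error) shrinks over time. A minimal DIANA-style sketch with an unbiased rand-k compressor (the step size 0.1 = k/d is a standard choice, assumed here):

    import numpy as np

    rng = np.random.default_rng(7)

    def rand_k(v, k):
        # Unbiased random sparsifier: keep k random coordinates,
        # rescaled by d/k so that E[C(v)] = v.
        out = np.zeros_like(v)
        idx = rng.choice(v.size, size=k, replace=False)
        out[idx] = v[idx] * (v.size / k)
        return out

    d, k = 100, 10
    x = rng.normal(size=d)  # the vector to communicate, e.g. a gradient
    h = np.zeros(d)         # shift maintained identically on both sides

    for _ in range(50):
        msg = rand_k(x - h, k)  # compress the *shifted* vector, not x
        estimate = h + msg      # receiver's reconstruction of x
        h = h + 0.1 * msg       # both sides update the shared shift

    # As h approaches x, the compression error of the estimate vanishes.
    print("shift error:", np.linalg.norm(x - h))
    print("last reconstruction error:", np.linalg.norm(x - estimate))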