Most algorithms for solving optimization problems or finding saddle points of convex-concave functions are fixed-point algorithms. In this work we consider the generic problem of …
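The snippet breaks off here, but the fixed-point viewpoint it refers to is easy to illustrate: an iteration x_{k+1} = T(x_k) converges to a point satisfying T(x) = x. The Python sketch below shows gradient descent written in this form; the toy quadratic objective and all names are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: gradient descent viewed as a fixed-point iteration
# x_{k+1} = T(x_k) with T(x) = x - gamma * grad_f(x), whose fixed points are
# exactly the stationary points of f. The quadratic test problem and all names
# are assumptions made for this example.

def fixed_point_iteration(T, x0, num_iters=100):
    """Iterate x_{k+1} = T(x_k) starting from x0."""
    x = x0
    for _ in range(num_iters):
        x = T(x)
    return x

# Toy objective f(x) = 0.5 * ||A x - b||^2.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
grad_f = lambda x: A.T @ (A @ x - b)

gamma = 0.2  # step size small enough that T is a contraction here
T = lambda x: x - gamma * grad_f(x)

x_star = fixed_point_iteration(T, x0=np.zeros(2))
print(x_star)  # approaches the minimizer of f, i.e. a fixed point of T
```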
Many compression techniques have been proposed to reduce the communication overhead of Federated Learning training procedures. However, these are typically designed for …
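The snippet does not say which compression techniques are meant; a standard example from this literature is top-k sparsification, sketched below under that assumption (all names are illustrative).

```python
import numpy as np

# Minimal sketch of one standard gradient compressor used to reduce
# communication in federated/distributed training: top-k sparsification.
# The choice of top-k is an assumption for illustration; the snippet above
# does not specify which compression techniques the paper studies.

def top_k_compress(v, k):
    """Keep only the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

g = np.array([0.05, -1.2, 0.3, 2.0, -0.01])
print(top_k_compress(g, k=2))  # only the two largest-magnitude coordinates survive
```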
Federated Learning is a collaborative training framework that leverages heterogeneous data distributed across a vast number of clients. Since it is practically infeasible to request and …
This paper aims to address the major challenges of Federated Learning (FL) on edge devices: limited memory and expensive communication. We propose a novel method, called …
Z Li, P Richtárik - arXiv preprint arXiv:2006.07013, 2020 - arxiv.org
In this paper, we study the performance of a large family of SGD variants in the smooth nonconvex regime. To this end, we propose a generic and flexible assumption capable of …
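The snippet does not state the proposed assumption, so the following is only a widely used "ABC-type" second-moment bound from the smooth nonconvex SGD literature, shown as an example of what such a generic condition on a stochastic gradient estimator g(x) can look like; it may or may not coincide with the assumption introduced in this paper.

```latex
% Example of a generic second-moment condition on a stochastic gradient g(x);
% shown for illustration only, not necessarily the assumption proposed above.
\[
  \mathbb{E}\,\|g(x)\|^{2}
  \;\le\;
  2A\bigl(f(x) - f^{\inf}\bigr) + B\,\|\nabla f(x)\|^{2} + C,
  \qquad
  f^{\inf} := \inf_{x} f(x), \quad A, B, C \ge 0 .
\]
```

Different choices of the constants A, B, C recover, for example, bounded-variance SGD (A = 0, B = 1) as well as various subsampled or compressed gradient estimators.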
Modern advancements in large-scale machine learning would be impossible without the paradigm of data-parallel distributed computing. Since distributed computing with large …
S Chezhegov, S Skorik, N Khachaturov… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid development of machine learning and deep learning has introduced increasingly complex optimization challenges that must be addressed. Indeed, training modern …
Federated learning is a distributed learning algorithm designed to train a single global model on a server using different clients and their local data. To improve the performance of …
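The server/client training loop described here can be sketched concretely. The code below is a minimal FedAvg-style round (broadcast, local SGD, averaging); all names and the toy objective are invented for illustration, and this is not claimed to be the method proposed in the paper.

```python
import numpy as np

# Minimal sketch of a generic server/client loop in the style of FedAvg-type
# aggregation: each client runs a few local SGD steps on its own data, and the
# server averages the resulting models. Everything here is an illustrative
# assumption, not the paper's method.

def local_sgd(w, data, grad_fn, lr=0.1, steps=5):
    """Run a few local gradient steps on one client's data."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w, data)
    return w

def server_round(w_global, client_datasets, grad_fn):
    """One communication round: broadcast, local training, average."""
    client_models = [local_sgd(w_global, d, grad_fn) for d in client_datasets]
    return np.mean(client_models, axis=0)

# Toy example: each client holds a target vector; grad_fn is for 0.5*||w - target||^2.
grad_fn = lambda w, target: w - target
clients = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
w = np.zeros(2)
for _ in range(10):
    w = server_round(w, clients, grad_fn)
print(w)  # moves toward the average of the client optima, [0.5, 0.5]
```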
E Shulgin, P Richtárik - Uncertainty in Artificial Intelligence, 2022 - proceedings.mlr.press
Communication is one of the key bottlenecks in the distributed training of large-scale machine learning models, and lossy compression of exchanged information, such as …
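As an example of the lossy compression referred to here, the sketch below implements rand-k sparsification, an unbiased compressor commonly analyzed in this line of work; it is chosen purely for illustration, since the snippet does not specify which compressors the paper studies.

```python
import numpy as np

# Sketch of an unbiased lossy compressor: rand-k sparsification keeps k random
# coordinates and rescales them by d/k so that E[C(v)] = v. This specific
# compressor is an assumption made for illustration only.

def rand_k_compress(v, k, rng=None):
    """Keep k uniformly random coordinates of v, scaled by d/k for unbiasedness."""
    rng = np.random.default_rng() if rng is None else rng
    d = v.shape[0]
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

v = np.array([1.0, -2.0, 3.0, 0.5])
print(rand_k_compress(v, k=2, rng=np.random.default_rng(0)))  # two coordinates survive, scaled by 2
```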