Momentum benefits non-iid federated learning simply and provably

Z Cheng, X Huang, P Wu, K Yuan - arXiv preprint arXiv:2306.16504, 2023 - arxiv.org
Federated learning is a powerful paradigm for large-scale machine learning, but it faces
significant challenges due to unreliable network connections, slow communication, and …
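
The snippet names the technique (momentum for non-iid FL) without showing it. As a rough illustration only, here is a minimal NumPy sketch of heavy-ball momentum inside FedAvg-style local updates; the toy least-squares objectives, step sizes, and the choice to persist each client's momentum buffer across rounds are assumptions, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_clients = 5, 4
    # Toy non-iid clients: each holds a least-squares problem with shifted data
    A = [rng.normal(loc=0.5 * i, size=(8, d)) for i in range(n_clients)]
    b = [rng.normal(size=8) for _ in range(n_clients)]

    def grad(i, x):
        return A[i].T @ (A[i] @ x - b[i])

    x = np.zeros(d)                                  # global model
    buf = [np.zeros(d) for _ in range(n_clients)]    # per-client momentum buffers
    lr, beta, local_steps = 1e-3, 0.9, 5

    for rnd in range(200):
        models = []
        for i in range(n_clients):
            xi = x.copy()
            for _ in range(local_steps):
                buf[i] = beta * buf[i] + grad(i, xi) # heavy-ball momentum step
                xi -= lr * buf[i]
            models.append(xi)
        x = np.mean(models, axis=0)                  # server averages client models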

Local exact-diffusion for decentralized optimization and learning

SA Alghunaim - IEEE Transactions on Automatic Control, 2024 - ieeexplore.ieee.org
Distributed optimization methods with local updates have recently attracted a lot of attention
due to their potential to reduce the communication cost of distributed methods. In these …
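
Exact diffusion has a standard single-step recursion (adapt, correct, combine); the local-update variant studied here interleaves multiple local steps between combine steps. Below is a sketch of the base recursion only, on toy quadratics over a ring; the topology, step size, and data are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 4, 3
    A = [rng.normal(size=(6, d)) for _ in range(n)]
    b = [rng.normal(size=6) for _ in range(n)]
    grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

    # Ring topology; combining with Wbar = (W + I)/2 is standard for exact diffusion
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    Wbar = (W + np.eye(n)) / 2

    alpha = 0.05
    x = np.zeros((n, d))
    psi_prev = x.copy()

    for k in range(300):
        psi = np.array([x[i] - alpha * grad(i, x[i]) for i in range(n)])  # adapt
        phi = psi + x - psi_prev                                          # correct
        x = Wbar @ phi                                                    # combine
        psi_prev = psi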

Which mode is better for federated learning? Centralized or Decentralized

Y Sun, L Shen, D Tao - arXiv preprint arXiv:2310.03461, 2023 - arxiv.org
Both centralized and decentralized approaches have shown excellent performance and
great application value in federated learning (FL). However, current studies do not provide …
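
The operational difference between the two modes is where averaging happens: a server averages everyone, or each client mixes only with neighbours. A short contrast, with the ring topology assumed for the decentralized case:

    import numpy as np

    n, d = 5, 3
    X = np.random.default_rng(2).normal(size=(n, d))  # one model per client

    # Centralized FL: a server averages every client model each round.
    x_server = X.mean(axis=0)

    # Decentralized FL: each client mixes only with its neighbours through a
    # doubly stochastic matrix W (ring topology assumed here).
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    X_mixed = W @ X  # repeated mixing drives all rows toward the global average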

Balancing communication and computation in gradient tracking algorithms for decentralized optimization

AS Berahas, R Bollapragada, S Gupta - Journal of Optimization Theory …, 2024 - Springer
Gradient tracking methods have emerged as one of the most popular approaches for solving
decentralized optimization problems over networks. In this setting, each node in the network …
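
For reference, the core gradient tracking recursion this line of work builds on: each node mixes its iterate with neighbours and descends along an auxiliary variable y that tracks the network-average gradient. A NumPy sketch on toy quadratics; the ring topology, step size, and data are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n, d = 4, 3
    A = [rng.normal(size=(6, d)) for _ in range(n)]
    b = [rng.normal(size=6) for _ in range(n)]
    grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

    W = np.zeros((n, n))  # ring topology
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

    alpha = 0.02
    x = np.zeros((n, d))
    g = np.array([grad(i, x[i]) for i in range(n)])
    y = g.copy()  # y_i tracks the network-average gradient

    for k in range(300):
        x_new = W @ x - alpha * y                  # mix, then descend along y
        g_new = np.array([grad(i, x_new[i]) for i in range(n)])
        y = W @ y + g_new - g                      # gradient tracking update
        x, g = x_new, g_new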

RandCom: Random Communication Skipping Method for Decentralized Stochastic Optimization

L Guo, SA Alghunaim, K Yuan, L Condat… - arXiv preprint arXiv …, 2023 - arxiv.org
Distributed optimization methods with random communication skips are gaining increasing
attention due to their proven benefits in reducing communication complexity …
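
The snippet names only the communication-skipping idea. The bare-bones version below runs a gossip round with probability p; note that plain skipping like this generally converges only to a neighbourhood under data heterogeneity, which is presumably what RandCom's additional correction terms address. This is an illustration of skipping, not the RandCom recursion itself.

    import numpy as np

    rng = np.random.default_rng(4)
    n, d, p = 4, 3, 0.3  # p: probability that a round includes communication
    A = [rng.normal(size=(6, d)) for _ in range(n)]
    b = [rng.normal(size=6) for _ in range(n)]
    grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

    W = np.zeros((n, n))  # ring topology
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

    alpha = 0.02
    x = np.zeros((n, d))
    for k in range(500):
        x = x - alpha * np.array([grad(i, x[i]) for i in range(n)])  # local step
        if rng.random() < p:
            x = W @ x  # gossip round happens only with probability p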

Gradient and variable tracking with multiple local SGD for decentralized non-convex learning

S Ge, TH Chang - arXiv preprint arXiv:2302.01537, 2023 - arxiv.org
Stochastic distributed optimization methods that solve an optimization problem over a multi-
agent network have played an important role in a variety of large-scale signal processing …

Gradient Tracking with Multiple Local SGD for Decentralized Non-Convex Learning

S Ge, TH Chang - 2023 62nd IEEE Conference on Decision …, 2023 - ieeexplore.ieee.org
The stochastic Gradient Tracking (GT) method for distributed optimization is known to be
robust against the inter-client variance caused by data heterogeneity. However, the …
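
One plausible interleaving of gradient tracking with multiple local SGD steps, for intuition only; the exact recursions in these two papers differ. Local steps descend along the stochastic gradient corrected toward the tracked global direction, and the tracker is refreshed once per communication round. The sampling scheme, placement of the correction, and step sizes are assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    n, d = 4, 3
    A = [rng.normal(size=(6, d)) for _ in range(n)]
    b = [rng.normal(size=6) for _ in range(n)]

    def sgrad(i, x):
        j = rng.integers(A[i].shape[0])           # sample one local data point
        return A[i][j] * (A[i][j] @ x - b[i][j])

    W = np.zeros((n, n))  # ring topology
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

    alpha, tau = 0.02, 5
    x = np.zeros((n, d))
    g = np.array([sgrad(i, x[i]) for i in range(n)])
    y = g.copy()  # tracking variable

    for rnd in range(200):
        g0 = g.copy()
        for _ in range(tau):                      # tau local stochastic steps
            for i in range(n):
                # local gradient, corrected toward the tracked global direction
                x[i] -= alpha * (sgrad(i, x[i]) - g0[i] + y[i])
        x = W @ x                                 # one communication round
        g = np.array([sgrad(i, x[i]) for i in range(n)])
        y = W @ y + g - g0                        # refresh the tracker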

On the computation-communication trade-off with a flexible gradient tracking approach

Y Huang, J Xu - 2023 62nd IEEE Conference on Decision and …, 2023 - ieeexplore.ieee.org
We propose a flexible gradient tracking approach with adjustable computation and
communication steps for solving distributed stochastic optimization problems over networks …
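
The adjustable-steps skeleton is easy to show: two loop counts trade local gradient work against consecutive gossip rounds. The paper's actual method layers gradient-tracking corrections on top of this; the sketch below shows only the two knobs, with topology, counts, and data assumed.

    import numpy as np

    rng = np.random.default_rng(6)
    n, d = 4, 3
    A = [rng.normal(size=(6, d)) for _ in range(n)]
    b = [rng.normal(size=6) for _ in range(n)]
    grads = lambda X: np.array([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n)])

    W = np.zeros((n, n))  # ring topology
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

    alpha = 0.02
    comp_steps, comm_steps = 3, 2  # the two tunable knobs
    x = np.zeros((n, d))
    for k in range(200):
        for _ in range(comp_steps):   # local computation: gradient steps
            x = x - alpha * grads(x)
        for _ in range(comm_steps):   # communication: consecutive gossip rounds
            x = W @ x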

Robust Decentralized Learning with Local Updates and Gradient Tracking

S Ghiasvand, A Reisizadeh, M Alizadeh… - arXiv preprint arXiv …, 2024 - arxiv.org
As distributed learning applications such as Federated Learning, the Internet of Things (IoT),
and Edge Computing grow, it is critical to address the shortcomings of such technologies …

Decentralized Sporadic Federated Learning: A Unified Methodology with Generalized Convergence Guarantees

S Zehtabi, DJ Han, R Parasnis… - arXiv preprint arXiv …, 2024 - arxiv.org
Decentralized Federated Learning (DFL) has received significant recent research attention,
capturing settings where both model updates and model aggregations--the two key FL …
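
A minimal sketch of the sporadic setting: each node flips independent coins for whether to take a local SGD step and whether to aggregate with neighbours in a given round. The per-node probabilities q_i and p_i, the ring topology, and the toy objectives are assumptions, not the paper's protocol.

    import numpy as np

    rng = np.random.default_rng(7)
    n, d = 5, 3
    A = [rng.normal(size=(6, d)) for _ in range(n)]
    b = [rng.normal(size=6) for _ in range(n)]
    grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

    W = np.zeros((n, n))  # ring topology
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

    alpha = 0.02
    q = np.full(n, 0.8)  # per-node probability of taking a local update
    p = np.full(n, 0.3)  # per-node probability of aggregating this round
    x = rng.normal(size=(n, d))

    for k in range(500):
        for i in range(n):
            if rng.random() < q[i]:          # sporadic local SGD step
                x[i] -= alpha * grad(i, x[i])
        x_old = x.copy()
        for i in range(n):
            if rng.random() < p[i]:          # sporadic neighbour aggregation
                x[i] = W[i] @ x_old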