This paper presents a new class of gradient methods for distributed machine learning that adaptively skip gradient calculations to learn with reduced communication and …
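A minimal sketch of the skipping idea this snippet describes, assuming NumPy and a hypothetical fixed threshold; the function name, threshold, and skip rule are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def maybe_send_gradient(grad_new, grad_last_sent, threshold=1e-3):
    """Upload the fresh gradient only when it has drifted enough from
    the last transmitted one; otherwise skip the round and let the
    server reuse its stale copy. The fixed threshold is an illustrative
    stand-in for the paper's adaptive skipping condition."""
    if np.linalg.norm(grad_new - grad_last_sent) > threshold:
        return grad_new, True        # communicate this round
    return grad_last_sent, False     # skip: no message sent
```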
The present paper develops a novel aggregated gradient approach for distributed machine learning that adaptively compresses the gradient communication. The key idea is to first …
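The key idea is elided in the snippet, but adaptive compression schemes of this kind often transmit a quantized version of the gradient's change since the last transmission. A sketch under that assumption (the uniform quantizer, level count, and function names are hypothetical, not the paper's scheme):

```python
import numpy as np

def quantize(v, num_levels=16):
    """Map the entries of v onto a uniform grid of roughly num_levels
    levels spanning [-max|v|, max|v|]. Illustrative quantizer only."""
    scale = np.max(np.abs(v)) + 1e-12
    step = 2 * scale / (num_levels - 1)
    return np.round(v / step) * step

def compress_innovation(grad, last_sent):
    """Transmit only the quantized change ('innovation') since the last
    transmission; server and worker both add it to their stored copy."""
    delta_q = quantize(grad - last_sent)
    return last_sent + delta_q  # new shared reference gradient
```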
SA Alghunaim - IEEE Transactions on Automatic Control, 2024 - ieeexplore.ieee.org
Distributed optimization methods with local updates have recently attracted considerable attention due to their potential to reduce communication cost. In these …
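A minimal NumPy simulation of the local-update pattern these methods share: each worker takes several local gradient steps between communications, and the server averages the resulting models. The quadratic losses, worker count, and step counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 10))  # 4 workers; f_i(x) = 0.5*||x - b_i||^2

def local_sgd_round(x, tau=5, lr=0.1):
    """One communication round: every worker runs tau local gradient
    steps from the shared iterate, then the server averages the
    resulting models (one message per tau local updates)."""
    local_models = []
    for b in B:
        xi = x.copy()
        for _ in range(tau):
            xi -= lr * (xi - b)      # gradient of 0.5*||xi - b||^2
        local_models.append(xi)
    return np.mean(local_models, axis=0)

x = np.zeros(10)
for _ in range(20):
    x = local_sgd_round(x)           # converges toward the mean of B
```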
Asynchronous computation and gradient compression have emerged as two key techniques for achieving scalability in distributed optimization for large-scale machine learning. This …
Recently, there has been growing interest in the study of median-based algorithms for distributed non-convex optimization. Two prominent examples include signSGD with majority vote, an …
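signSGD with majority vote admits a compact sketch: each worker transmits one bit per coordinate (the sign of its gradient), and the server applies the coordinate-wise majority sign. The learning rate and array shapes below are illustrative assumptions:

```python
import numpy as np

def signsgd_majority_step(x, grads, lr=0.01):
    """grads has shape (num_workers, dim). Each worker contributes only
    sign(grad); the server takes a coordinate-wise majority vote over
    the signs and applies the signed update."""
    votes = np.sum(np.sign(grads), axis=0)  # per-coordinate vote tally
    return x - lr * np.sign(votes)          # ties (vote 0) give no move
```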
Distributed optimization is vital in solving large-scale machine learning problems. A widely shared feature of distributed optimization techniques is the requirement that all nodes …
J Wu, W Huang, J Huang… - … Conference on Machine …, 2018 - proceedings.mlr.press
Large-scale distributed optimization is of great importance in various applications. For data-parallel distributed learning, the inter-node gradient communication often becomes …
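One standard way to cut this communication is to compress gradients and carry the compression error forward into the next round (error compensation). In the sketch below, top-k sparsification stands in as the compressor; that choice, like the class and parameter names, is an assumption for illustration, not necessarily the cited paper's scheme:

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class ErrorFeedbackWorker:
    """Compress gradients and accumulate the compression error, adding
    it back before the next compression (error compensation)."""
    def __init__(self, dim, k):
        self.residual = np.zeros(dim)
        self.k = k

    def compress(self, grad):
        corrected = grad + self.residual   # re-inject previous error
        msg = top_k(corrected, self.k)     # what actually gets sent
        self.residual = corrected - msg    # remember what was dropped
        return msg

worker = ErrorFeedbackWorker(dim=1000, k=10)
msg = worker.compress(np.random.default_rng(0).normal(size=1000))
```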
We consider distributed optimization over several devices, each sending incremental model updates to a central server. This setting is considered, for instance, in federated learning …
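A sketch of the server side of this setting: devices send incremental model updates (deltas), and the central server applies their weighted average, FedAvg-style. The function name and uniform default weighting are assumptions:

```python
import numpy as np

def server_aggregate(global_model, deltas, weights=None):
    """Apply the weighted average of the devices' incremental updates
    to the global model. deltas has shape (num_devices, dim); weights
    default to uniform."""
    deltas = np.asarray(deltas)
    if weights is None:
        weights = np.full(len(deltas), 1.0 / len(deltas))
    return global_model + np.tensordot(weights, deltas, axes=1)
```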
B Li, S Cen, Y Chen, Y Chi - Journal of Machine Learning Research, 2020 - jmlr.org
There is growing interest in large-scale machine learning and optimization over decentralized networks, e.g., in the context of multi-agent learning and federated learning …
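A minimal sketch of a decentralized iteration of the kind studied in this line of work: agents average their models with neighbors through a doubly stochastic mixing matrix, then take local gradient steps. The ring topology, Metropolis-style weights, and step size are illustrative assumptions:

```python
import numpy as np

# Doubly stochastic mixing matrix for a ring of 4 agents
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def dgd_step(X, grads, lr=0.05):
    """One decentralized gradient descent step: each agent mixes its
    model with its ring neighbors via W, then moves along its own local
    gradient. X and grads both have shape (n_agents, dim)."""
    return W @ X - lr * grads
```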