With the growth of data and the need for distributed optimization methods, solvers that work well on a single machine must be redesigned to leverage distributed computation. Recent …
We study optimization algorithms for the finite-sum problems frequently arising in machine learning applications. First, we propose novel variants of stochastic gradient descent with a …
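For reference, the finite-sum objective this snippet refers to has a standard form, reproduced below together with the basic SGD step; the notation ($f_i$ for the loss on the $i$-th example, step size $\gamma_k$, sampled index $i_k$) is assumed here rather than taken from the paper.

    \min_{x \in \mathbb{R}^d} \; f(x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x),
    \qquad
    x_{k+1} = x_k - \gamma_k \, \nabla f_{i_k}(x_k), \quad i_k \sim \mathrm{Uniform}\{1, \dots, n\}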
Synchronous mini-batch SGD is the state of the art for large-scale distributed machine learning. However, in practice, its convergence is bottlenecked by slow communication rounds …
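To make the communication bottleneck concrete, here is a minimal Python sketch of synchronous mini-batch SGD over simulated workers; the least-squares loss and all names (local_grad, sync_minibatch_sgd, shards, lr, rounds) are illustrative assumptions, not details of the cited work.

    import numpy as np

    def local_grad(w, X, y):
        # Illustrative least-squares gradient on one worker's data shard.
        return X.T @ (X @ w - y) / len(y)

    def sync_minibatch_sgd(shards, w, lr=0.1, rounds=100):
        for _ in range(rounds):
            # Every worker computes a gradient on its shard; the averaging
            # below stands in for the all-reduce whose latency dominates
            # at scale.
            grads = [local_grad(w, X, y) for X, y in shards]
            w = w - lr * np.mean(grads, axis=0)
        return w

Because every step ends with a global average, each step costs exactly one communication round, which is why slow links directly throttle convergence.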
S Kar, B Swenson - arXiv preprint arXiv:1901.00214, 2019 - arxiv.org
We consider $K$-means clustering in networked environments (e.g., internet of things (IoT) and sensor networks) where data is inherently distributed across nodes and processing …
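A common template for such networked $K$-means is Lloyd's iteration in which each node shares only per-cluster sums and counts rather than raw data; below is a minimal Python sketch under that assumption (local_stats, distributed_kmeans_round, and blocks are hypothetical names, and this generic scheme is not necessarily the algorithm of the cited paper).

    import numpy as np

    def local_stats(X, centers):
        # Assign this node's points to the nearest center and return the
        # per-cluster sums and counts (the only quantities communicated).
        K, d = centers.shape
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        sums = np.zeros((K, d))
        counts = np.zeros(K)
        for k in range(K):
            mask = labels == k
            sums[k] = X[mask].sum(axis=0)
            counts[k] = mask.sum()
        return sums, counts

    def distributed_kmeans_round(blocks, centers):
        # Each node computes its statistics locally (in parallel in
        # practice); the aggregates then define the updated centers.
        stats = [local_stats(X, centers) for X in blocks]
        sums = sum(s for s, _ in stats)
        counts = sum(c for _, c in stats)
        nonempty = counts > 0
        centers = centers.copy()
        centers[nonempty] = sums[nonempty] / counts[nonempty, None]
        return centers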
The paper addresses the design and analysis of communication-efficient distributed algorithms for solving weighted non-linear least squares problems in multi-agent networks …
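For orientation, a weighted non-linear least squares problem over $N$ agents typically has the form below, where agent $i$ observes $y_i$ through a local non-linear map $h_i$ with weight matrix $W_i$; this notation is assumed here and may differ from the paper's.

    \min_{x} \; \sum_{i=1}^{N} \big( y_i - h_i(x) \big)^{\top} W_i \big( y_i - h_i(x) \big)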
B Zhang, J Geng, W Xu, L Lai - 2018 52nd Annual Conference …, 2018 - ieeexplore.ieee.org
One major bottleneck in the design of large-scale distributed machine learning algorithms is the communication cost. In this paper, we propose and analyze a distributed learning …
K Mishchenko - arXiv preprint arXiv:2110.12281, 2021 - arxiv.org
Many recent successes of machine learning went hand in hand with advances in optimization. The exchange of ideas between these fields has worked both ways, with …