Q-GADMM: Quantized group ADMM for communication efficient decentralized machine learning

A Elgabli, J Park, AS Bedi, CB Issaid… - IEEE Transactions …, 2020 - ieeexplore.ieee.org
In this article, we propose a communication-efficient decentralized machine learning (ML)
algorithm, coined quantized group ADMM (Q-GADMM). To reduce the number of …
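
A minimal sketch of the kind of stochastic uniform quantizer that quantized decentralized methods of this family apply to the model updates exchanged between neighboring workers; the function and bit width below are illustrative assumptions, not Q-GADMM's exact quantizer or update rule.

```python
import numpy as np

def stochastic_quantize(delta, num_bits=4, rng=None):
    """Stochastically round a model-update vector to 2**num_bits uniform levels.

    Generic quantizer for illustration only; it is not taken verbatim from
    Q-GADMM.  The rounding is randomized so the quantized vector is an
    unbiased estimate of the input.
    """
    rng = rng or np.random.default_rng()
    levels = 2 ** num_bits - 1
    lo, hi = delta.min(), delta.max()
    if hi == lo:                                   # constant vector: nothing to do
        return delta.copy()
    scale = (hi - lo) / levels
    z = (delta - lo) / scale                       # positions in [0, levels]
    z = np.floor(z) + (rng.random(delta.shape) < (z - np.floor(z)))
    return lo + z * scale

# Example: quantize the difference between two successive model iterates,
# which is what a worker would transmit instead of a full-precision update.
rng = np.random.default_rng(0)
theta_old = rng.normal(size=10)
theta_new = theta_old + 0.1 * rng.normal(size=10)
update_q = stochastic_quantize(theta_new - theta_old, num_bits=4, rng=rng)
print(np.abs(update_q - (theta_new - theta_old)).max())
```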

Laplacian matrix sampling for communication-efficient decentralized learning

CC Chiu, X Zhang, T He, S Wang… - IEEE Journal on …, 2023 - ieeexplore.ieee.org
We consider the problem of training a given machine learning model by decentralized
parallel stochastic gradient descent over training data distributed across multiple nodes …
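
One round of decentralized parallel SGD of this type can be written as a gossip-averaging step over a mixing matrix followed by a local gradient step. The sketch below assumes a fixed, doubly stochastic mixing matrix W; the paper's subject, how the communication graph (and hence W) is sampled, is not modeled here.

```python
import numpy as np

def decentralized_sgd_step(models, grads, W, lr=0.1):
    """One round of decentralized parallel SGD.

    models: (n_nodes, dim) current local parameters
    grads:  (n_nodes, dim) local stochastic gradients
    W:      (n_nodes, n_nodes) doubly stochastic mixing matrix whose sparsity
            pattern is the communication graph (here simply given as fixed).
    """
    mixed = W @ models             # each node averages its neighbors' models
    return mixed - lr * grads      # then takes a local gradient step

# Toy example: 4 nodes on a ring with symmetric mixing weights.
n, dim = 4, 3
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
rng = np.random.default_rng(1)
models = rng.normal(size=(n, dim))
grads = rng.normal(size=(n, dim))
print(decentralized_sgd_step(models, grads, W))
```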

GADMM: Fast and communication efficient framework for distributed machine learning

A Elgabli, J Park, AS Bedi, M Bennis… - Journal of Machine …, 2020 - jmlr.org
When the data is distributed across multiple servers, lowering the communication cost
between the servers (or workers) while solving the distributed learning problem is an …
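
GADMM places the workers on a chain, splits them into head and tail groups that update in alternation, and has each worker exchange variables only with its two neighbors. The toy sketch below reproduces that head/tail communication pattern on a quadratic consensus problem so the local updates have closed form; the parameter choices and quadratic costs are illustrative, and this is not the paper's pseudocode.

```python
import numpy as np

def gadmm_quadratic(a, rho=1.0, iters=100):
    """GADMM-style solver for min_theta sum_n 0.5*||theta - a[n]||^2.

    Workers form a chain with constraints theta_n = theta_{n+1}; even-indexed
    workers act as "heads" and odd-indexed ones as "tails", updating in
    alternation so each worker exchanges variables only with its neighbors.
    """
    N, d = a.shape
    theta = np.zeros((N, d))
    lam = np.zeros((N - 1, d))          # lam[n] couples theta[n] and theta[n+1]

    def local_update(n):
        # Closed-form minimizer of the augmented Lagrangian in theta[n],
        # holding the neighbors' variables and the dual variables fixed.
        num = a[n].copy()
        denom = 1.0
        if n > 0:                       # left neighbor and dual lam[n-1]
            num += lam[n - 1] + rho * theta[n - 1]
            denom += rho
        if n < N - 1:                   # right neighbor and dual lam[n]
            num += -lam[n] + rho * theta[n + 1]
            denom += rho
        return num / denom

    for _ in range(iters):
        for n in range(0, N, 2):        # heads update in parallel
            theta[n] = local_update(n)
        for n in range(1, N, 2):        # then tails update in parallel
            theta[n] = local_update(n)
        for n in range(N - 1):          # dual ascent on each chain constraint
            lam[n] += rho * (theta[n] - theta[n + 1])
    return theta

a = np.array([[0.0], [1.0], [2.0], [3.0]])
print(gadmm_quadratic(a)[:, 0])         # all entries approach the mean 1.5
```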

Double quantization for communication-efficient distributed optimization

Y Yu, J Wu, L Huang - Advances in Neural Information …, 2019 - proceedings.neurips.cc
Modern distributed training of machine learning models often suffers from high
communication overhead for synchronizing stochastic gradients and model parameters. In …
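
Here "double" refers to compressing both directions of traffic: the model parameters exchanged for synchronization and the stochastic gradients sent back. The sketch below uses a deliberately crude 1-bit sign compressor just to show where compression is applied; the paper designs far more refined quantizers.

```python
import numpy as np

def sign_quantize(x):
    """1-bit compressor: keep only the sign of each entry plus one scale.

    A simple stand-in used for illustration only, not the paper's scheme.
    """
    scale = np.abs(x).mean()
    return scale * np.sign(x)

rng = np.random.default_rng(0)
model = rng.normal(size=8)
grad = rng.normal(size=8)

# "Double" quantization: compress the model parameters being synchronized
# *and* the stochastic gradients, so both kinds of traffic shrink.
q_model = sign_quantize(model)
q_grad = sign_quantize(grad)
print(q_model, q_grad, sep="\n")
```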

Communication-efficient decentralized machine learning over heterogeneous networks

P Zhou, Q Lin, D Loghin, BC Ooi… - 2021 IEEE 37th …, 2021 - ieeexplore.ieee.org
In the last few years, distributed machine learning has usually been executed over
heterogeneous networks such as a local area network within a multi-tenant cluster or a wide …

A linear speedup analysis of distributed deep learning with sparse and quantized communication

P Jiang, G Agrawal - Advances in Neural Information …, 2018 - proceedings.neurips.cc
The large communication overhead has imposed a bottleneck on the performance of
distributed Stochastic Gradient Descent (SGD) for training deep neural networks. Previous …
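
A hedged illustration of sparsified and quantized gradient communication: keep only the k largest-magnitude coordinates and store the kept values at reduced precision, so only a small index/value payload crosses the network. The operator below is an assumption for illustration; the compressors analyzed in the paper may differ.

```python
import numpy as np

def compress_gradient(grad, k=64):
    """Sparsify a gradient to its k largest-magnitude entries and store the
    kept values at reduced precision (a crude quantization step)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]   # top-k coordinates
    values = grad[idx].astype(np.float16)          # reduced-precision values
    return idx.astype(np.int32), values            # what actually gets sent

rng = np.random.default_rng(0)
grad = rng.normal(size=10000)
idx, vals = compress_gradient(grad, k=64)
print(f"{grad.nbytes} bytes dense vs {idx.nbytes + vals.nbytes} bytes compressed")
```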

Communication-efficient distributed learning via lazily aggregated quantized gradients

J Sun, T Chen, G Giannakis… - Advances in Neural …, 2019 - proceedings.neurips.cc
The present paper develops a novel aggregated gradient approach for distributed machine
learning that adaptively compresses the gradient communication. The key idea is to first …
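
The snippet is truncated, but the lazily aggregated quantized gradient idea can be sketched as follows: each worker quantizes the change relative to the gradient it last transmitted and uploads it only when that change is large enough, so the server reuses stale gradients in the other rounds. The threshold rule and quantizer below are simplified assumptions rather than the paper's exact skipping condition.

```python
import numpy as np

class LazyQuantizedWorker:
    """Worker that skips an upload when its gradient has barely changed.

    Simplified sketch: the skipping rule and the quantizer are assumptions,
    not the exact conditions used in the paper.
    """

    def __init__(self, threshold=0.05, num_bits=4):
        self.threshold = threshold
        self.num_bits = num_bits
        self.last_sent = None              # gradient the server currently holds

    def _quantize(self, x):
        # Deterministic uniform rounding (simplified).
        levels = 2 ** self.num_bits - 1
        peak = np.abs(x).max()
        scale = peak / levels if peak > 0 else 1.0
        return np.round(x / scale) * scale

    def maybe_send(self, grad):
        if self.last_sent is None:         # first round: always communicate
            self.last_sent = self._quantize(grad)
            return self.last_sent
        innovation = self._quantize(grad - self.last_sent)
        if np.linalg.norm(innovation) <= self.threshold * np.linalg.norm(grad):
            return None                    # skip: server reuses self.last_sent
        self.last_sent = self.last_sent + innovation
        return innovation                  # send only the quantized difference

rng = np.random.default_rng(0)
worker = LazyQuantizedWorker()
g0 = rng.normal(size=5)
# The second gradient is close to the first, so that upload is skipped.
for g in (g0, g0 + 1e-3 * rng.normal(size=5), -g0):
    msg = worker.maybe_send(g)
    print("sent" if msg is not None else "skipped")
```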

Robust communication-efficient decentralized learning with heterogeneity

X Zhang, Y Wang, S Chen, C Wang, D Yu… - Journal of Systems …, 2023 - Elsevier
In this paper, we propose a robust communication-efficient decentralized learning algorithm,
named RCEDL, to address data heterogeneity, communication heterogeneity and …

NUQSGD: Provably communication-efficient data-parallel SGD via nonuniform quantization

A Ramezani-Kebrya, F Faghri, I Markov… - Journal of Machine …, 2021 - jmlr.org
As the size and complexity of models and datasets grow, so does the need for
communication-efficient variants of stochastic gradient descent that can be deployed to …
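
The distinguishing feature of nonuniform quantization is that the quantization levels are not equally spaced; a common choice is exponentially spaced levels so that small normalized magnitudes get finer resolution. The sketch below is a hedged reading of that idea, not the paper's exact level set or encoding.

```python
import numpy as np

def nonuniform_quantize(x, s=3, rng=None):
    """Quantize x with exponentially spaced levels.

    Each entry is normalized by the vector's L2 norm and stochastically
    rounded to one of the levels {0, 2^-s, ..., 1/2, 1}.  The level set and
    rounding are illustrative assumptions, not NUQSGD's exact construction.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(x)
    if norm == 0:
        return np.zeros_like(x)
    levels = np.array([0.0] + [2.0 ** (-j) for j in range(s, -1, -1)])
    r = np.abs(x) / norm                       # normalized magnitudes in [0, 1]
    hi = np.clip(np.searchsorted(levels, r), 1, len(levels) - 1)
    lo = hi - 1
    p_up = (r - levels[lo]) / (levels[hi] - levels[lo])   # prob. of rounding up
    q = np.where(rng.random(x.shape) < p_up, levels[hi], levels[lo])
    return norm * np.sign(x) * q

rng = np.random.default_rng(0)
g = rng.normal(size=8)
print(nonuniform_quantize(g, s=3, rng=rng))
```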

GRACE: A compressed communication framework for distributed machine learning

H Xu, CY Ho, AM Abdelmoniem, A Dutta… - 2021 IEEE 41st …, 2021 - ieeexplore.ieee.org
Powerful computer clusters are used nowadays to train complex deep neural networks
(DNNs) on large datasets. Distributed training increasingly becomes communication bound …
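
A framework of this kind typically hides individual compression methods behind a single compress/decompress interface so the training loop can swap them freely. The class names and signatures below are illustrative assumptions, not GRACE's actual API.

```python
import numpy as np
from abc import ABC, abstractmethod

class Compressor(ABC):
    """Common interface a compression framework can expose to the training
    loop: compress before sending, decompress after receiving.  Names here
    are illustrative, not GRACE's actual classes."""

    @abstractmethod
    def compress(self, tensor):
        ...

    @abstractmethod
    def decompress(self, payload, shape):
        ...

class TopKCompressor(Compressor):
    """Keep only the k largest-magnitude entries of the gradient."""

    def __init__(self, k):
        self.k = k

    def compress(self, tensor):
        flat = tensor.ravel()
        idx = np.argsort(np.abs(flat))[-self.k:]
        return (idx, flat[idx]), tensor.shape      # payload + metadata

    def decompress(self, payload, shape):
        idx, values = payload
        out = np.zeros(int(np.prod(shape)))
        out[idx] = values
        return out.reshape(shape)

# The training loop only sees the interface, so compressors are swappable.
comp = TopKCompressor(k=5)
grad = np.random.default_rng(0).normal(size=(4, 8))
payload, shape = comp.compress(grad)
print(comp.decompress(payload, shape))
```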