Communication compression techniques in distributed deep learning: A survey

Z Wang, M Wen, Y Xu, Y Zhou, JH Wang… - Journal of Systems …, 2023 - Elsevier
Training data and neural network models are growing increasingly large, and the
training time of deep learning on a single machine becomes unbearably long. To reduce …
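
As context for the survey's subject matter, here is a minimal sketch of two compression operators commonly covered in this literature: top-k sparsification and QSGD-style unbiased stochastic quantization. The function names and the 1-D NumPy setting are illustrative assumptions, not taken from the survey.

    import numpy as np

    def top_k(x: np.ndarray, k: int) -> np.ndarray:
        """Keep the k largest-magnitude entries of x; zero out the rest."""
        out = np.zeros_like(x)
        idx = np.argpartition(np.abs(x), -k)[-k:]
        out[idx] = x[idx]
        return out

    def stochastic_quantize(x: np.ndarray, levels: int) -> np.ndarray:
        """QSGD-style quantizer: unbiased randomized rounding onto `levels`
        uniform levels per unit norm, so E[output] = x."""
        norm = np.linalg.norm(x)
        if norm == 0.0:
            return x.copy()
        scaled = np.abs(x) / norm * levels      # each entry now in [0, levels]
        lower = np.floor(scaled)
        round_up = np.random.rand(*x.shape) < (scaled - lower)
        return np.sign(x) * (lower + round_up) * norm / levels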

BEER: Fast Rate for Decentralized Nonconvex Optimization with Communication Compression

H Zhao, B Li, Z Li, P Richtárik… - Advances in Neural …, 2022 - proceedings.neurips.cc
Communication efficiency has been widely recognized as the bottleneck for large-scale
decentralized machine learning applications in multi-agent or federated environments. To …
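
BEER's exact recursion is given in the paper; as a hedged sketch of the compressed-gossip pattern this line of work builds on (a CHOCO-style update, an assumption here rather than BEER itself), each agent i mixes publicly known compressed copies and refreshes its own copy by compressing only the residual:

    % g_i^t: local (stochastic) gradient; \mathcal{C}: contractive compressor;
    % W = (w_{ij}): mixing matrix; \gamma, \eta: consensus and step sizes.
    \begin{align*}
      x_i^{t+1} &= x_i^t - \eta\, g_i^t
                  + \gamma \sum_{j} w_{ij}\bigl(\hat{x}_j^t - \hat{x}_i^t\bigr),\\
      \hat{x}_i^{t+1} &= \hat{x}_i^t + \mathcal{C}\bigl(x_i^{t+1} - \hat{x}_i^t\bigr).
    \end{align*}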

Innovation compression for communication-efficient distributed optimization with linear convergence

J Zhang, K You, L Xie - IEEE Transactions on Automatic …, 2023 - ieeexplore.ieee.org
Information compression is essential to reduce communication cost in distributed
optimization over peer-to-peer networks. This article proposes a communication-efficient …
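
The "innovation" in the title refers to the part of the state the receiver cannot predict. A minimal sketch of the generic difference-compression pattern (the paper's exact scheme may differ): both endpoints of a link keep a synchronized reference, and only the compressed innovation crosses the wire.

    import numpy as np

    def top_k(x, k):
        """Keep the k largest-magnitude entries; zero the rest."""
        out = np.zeros_like(x)
        idx = np.argpartition(np.abs(x), -k)[-k:]
        out[idx] = x[idx]
        return out

    class InnovationCodec:
        """One endpoint's view of a link; the peer applies the same updates
        to its own copy of `ref`, so the two references stay synchronized."""
        def __init__(self, dim, k):
            self.ref = np.zeros(dim)
            self.k = k

        def encode(self, x):                      # sender side
            q = top_k(x - self.ref, self.k)       # compress only the innovation
            self.ref += q
            return q                              # all that is transmitted

        def decode(self, q):                      # receiver side
            self.ref += q
            return self.ref                       # estimate of the sender's state

As the iterates converge, the innovation shrinks, so a fixed per-round budget loses vanishingly little information; this is the usual intuition for why linear convergence can survive compression.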

Improving the transient times for distributed stochastic gradient methods

K Huang, S Pu - IEEE Transactions on Automatic Control, 2022 - ieeexplore.ieee.org
We consider the distributed optimization problem where agents, each possessing a local
cost function, collaboratively minimize the average of the cost functions over a connected …
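
The "transient time" is the number of iterations such a method needs before its error matches that of centralized stochastic gradient descent. As a concrete representative of the family analyzed here (an assumption; the paper covers more than one method), distributed stochastic gradient tracking updates each agent i as:

    % W = (w_{ij}): mixing matrix over the connected network; \alpha: step size;
    % the auxiliary variable y_i tracks the network-average stochastic gradient.
    \begin{align*}
      x_i^{t+1} &= \sum_{j} w_{ij}\bigl(x_j^t - \alpha\, y_j^t\bigr),\\
      y_i^{t+1} &= \sum_{j} w_{ij}\, y_j^t
                  + \nabla f_i\bigl(x_i^{t+1}; \xi_i^{t+1}\bigr)
                  - \nabla f_i\bigl(x_i^t; \xi_i^t\bigr).
    \end{align*}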

Compressed gradient tracking for decentralized optimization over general directed networks

Z Song, L Shi, S Pu, M Yan - IEEE Transactions on Signal …, 2022 - ieeexplore.ieee.org
In this paper, we propose two communication-efficient decentralized optimization algorithms
over a general directed multi-agent network. The first algorithm, termed Compressed Push …
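
For a directed network, a single doubly stochastic mixing matrix is generally unavailable, which is what the push-pull template addresses. A sketch of the uncompressed skeleton (the compressed recursions are in the paper): stack the agents' iterates into x and the gradient trackers into y, mix iterates with a row-stochastic R, and track gradients with a column-stochastic C.

    \begin{align*}
      x^{t+1} &= R\,\bigl(x^t - \alpha\, y^t\bigr),\\
      y^{t+1} &= C\, y^t + \nabla F\bigl(x^{t+1}\bigr) - \nabla F\bigl(x^t\bigr)
    \end{align*}
    % R row-stochastic (pull/mix), C column-stochastic (push/track);
    % \nabla F stacks the local gradients \nabla f_i(x_i).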

Communication compression for distributed nonconvex optimization

X Yi, S Zhang, T Yang, T Chai… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In this article, we consider distributed nonconvex optimization where the cost functions
are distributed over the agents. Noting that information compression is a key tool to reduce the …

Finite-bit quantization for distributed algorithms with linear convergence

N Michelusi, G Scutari, CS Lee - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
This paper studies distributed algorithms for (strongly convex) composite optimization
problems over mesh networks, subject to quantized communications. Instead of focusing on …
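
A minimal sketch of a b-bit uniform quantizer of the kind such schemes rely on (names and signature are illustrative): the key idea is that, for a linearly convergent algorithm, the quantization range can be re-centered at the previous transmission and shrunk geometrically, so a fixed number of bits per round suffices.

    import numpy as np

    def finite_bit_quantize(x, center, radius, bits):
        """Snap x onto a uniform grid of 2**bits points covering
        [center - radius, center + radius], componentwise."""
        levels = 2 ** bits - 1
        lo = center - radius
        step = 2.0 * radius / levels
        q = np.round((np.clip(x, lo, center + radius) - lo) / step)
        return lo + step * q    # receiver reconstructs from q and (lo, step)

    # Shrinking-range usage: after each round, re-center at the value just
    # reconstructed and set radius *= rho for some rho < 1 matching the
    # algorithm's linear rate (an assumption mirroring the paper's setting).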

A communication-efficient decentralized Newton's method with provably faster convergence

H Liu, J Zhang, AMC So, Q Ling - IEEE Transactions on Signal …, 2023 - ieeexplore.ieee.org
In this article, we consider a strongly convex finite-sum minimization problem over a
decentralized network and propose a communication-efficient decentralized Newton's …
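
For reference, the centralized Newton step for the finite-sum problem min_x (1/n) \sum_i f_i(x) is the update below; the communication burden of decentralized variants comes from approximating the global Hessian-gradient product from local information.

    x^{t+1} = x^t
      - \Bigl(\tfrac{1}{n}\sum_{i=1}^{n} \nabla^2 f_i(x^t)\Bigr)^{-1}
        \Bigl(\tfrac{1}{n}\sum_{i=1}^{n} \nabla f_i(x^t)\Bigr)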

Gradient and variable tracking with multiple local SGD for decentralized non-convex learning

S Ge, TH Chang - arXiv preprint arXiv:2302.01537, 2023 - arxiv.org
Stochastic distributed optimization methods that solve an optimization problem over a multi-
agent network have played an important role in a variety of large-scale signal processing …
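
"Multiple local SGD" means each agent takes several gradient steps between communications, trading extra computation for fewer communication rounds. A minimal sketch under that reading (the paper's gradient- and variable-tracking corrections for the resulting client drift are omitted):

    import numpy as np

    def local_updates(x, stoch_grad, lr, steps, rng):
        """Several local SGD steps on one agent between communications."""
        for _ in range(steps):
            x = x - lr * stoch_grad(x, rng)
        return x

    def communication_round(X, W, stoch_grads, lr, local_steps, rng):
        """All agents update locally, then take one gossip step with the
        doubly stochastic mixing matrix W (rows of X are agents' iterates)."""
        X_local = np.stack([local_updates(x, g, lr, local_steps, rng)
                            for x, g in zip(X, stoch_grads)])
        return W @ X_local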

Decentralized composite optimization with compression

Y Li, X Liu, J Tang, M Yan, K Yuan - arXiv preprint arXiv:2108.04448, 2021 - arxiv.org
Decentralized optimization and communication compression have shown great potential
for accelerating distributed machine learning by mitigating the communication …
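
"Composite" here means each local objective is a smooth term plus a nonsmooth regularizer handled by a proximal step. A minimal single-agent sketch with an l1 regularizer (the paper layers decentralization and compression on top of this structure):

    import numpy as np

    def soft_threshold(v, tau):
        """Proximal operator of tau * ||.||_1: shrink entries toward zero."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def proximal_gradient_step(x, grad_smooth, lr, reg):
        """One step for f(x) + reg * ||x||_1: gradient step on the smooth
        part, then the prox of the nonsmooth part."""
        return soft_threshold(x - lr * grad_smooth(x), lr * reg)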