Double Quantization for Communication-Efficient Distributed Optimization

Y Yu, J Wu, L Huang - Advances in Neural Information Processing Systems, 2019 - proceedings.neurips.cc (also as arXiv preprint arXiv:1805.10111, 2018 - arxiv.org)

Modern distributed training of machine learning models often suffers from high
communication overhead for synchronizing stochastic gradients and model parameters. In …

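The abstract is cut off above, but the title and the snippet together indicate the paper's setting: compressing both the stochastic gradients and the model parameters that workers exchange, so each synchronization round sends low-bit codes instead of full-precision vectors. As a rough illustration only, and not the authors' actual scheme, the sketch below implements a standard unbiased stochastic uniform quantizer of the kind such compression methods build on; the function names, the `num_levels` parameter, and the max-norm scaling are assumptions made for this sketch.

```python
import numpy as np

def stochastic_quantize(v: np.ndarray, num_levels: int = 15):
    """Map v to low-bit codes: a float scale, uint8 magnitude levels, int8 signs.

    Rounds |v| / scale * num_levels up or down at random so that the
    reconstruction is unbiased: E[dequantize(...)] == v elementwise.
    (Illustrative sketch; not the quantizer from the paper.)
    """
    scale = float(np.abs(v).max())
    if scale == 0.0:
        zeros = np.zeros(v.shape, dtype=np.uint8)
        return scale, zeros, zeros.astype(np.int8)
    normalized = np.abs(v) / scale * num_levels      # values in [0, num_levels]
    lower = np.floor(normalized)
    prob_up = normalized - lower                     # P(round up) preserves the mean
    codes = (lower + (np.random.rand(*v.shape) < prob_up)).astype(np.uint8)
    return scale, codes, np.sign(v).astype(np.int8)

def dequantize(scale: float, codes: np.ndarray, signs: np.ndarray,
               num_levels: int = 15) -> np.ndarray:
    """Reconstruct a float vector from (scale, codes, signs)."""
    return scale * signs * (codes.astype(np.float64) / num_levels)

# Usage: compress a gradient before sending, reconstruct on the receiver.
grad = np.random.randn(1000)
scale, codes, signs = stochastic_quantize(grad)
recovered = dequantize(scale, codes, signs)
print("relative error:", np.linalg.norm(grad - recovered) / np.linalg.norm(grad))
```

Because the rounding is unbiased, the reconstruction equals the original vector in expectation, at the cost of extra variance; sending one uint8 code and one sign per coordinate plus a single float scale is what makes per-round communication cheap.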