Double Quantization for Communication-Efficient Distributed Optimization
Y Yu, J Wu, L Huang - Advances in Neural Information Processing Systems, 2019 - proceedings.neurips.cc (also arXiv:1805.10111, 2018 - arxiv.org)
Modern distributed training of machine learning models often suffers from high communication overhead for synchronizing stochastic gradients and model parameters. In this paper, to …
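The snippet is cut off mid-sentence, but its opening names the bottleneck: every training step must synchronize stochastic gradients (and, in parameter-server setups, model parameters) across workers. As a point of reference only, the sketch below shows the plain full-precision all-reduce step whose bandwidth cost the abstract alludes to; it is a generic PyTorch data-parallel pattern, not the paper's proposed method, and the function name and setup are assumptions for illustration.

```python
# Generic (hypothetical) per-step gradient synchronization in data-parallel
# training: each worker ships a full fp32 copy of every gradient, every step.
# This is the communication volume that compression schemes aim to reduce.
import torch
import torch.distributed as dist


def synchronize_gradients(model: torch.nn.Module, world_size: int) -> None:
    """Average gradients across workers with an all-reduce.

    Assumes the default process group has already been initialized on each
    worker (e.g. via dist.init_process_group).
    """
    for p in model.parameters():
        if p.grad is not None:
            # Sum the gradient tensor over all workers, then rescale to a mean.
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad.div_(world_size)
```

Each call moves roughly four bytes per model parameter per worker (fp32), every iteration, which is why this line of work compresses what is exchanged rather than shipping raw tensors.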