Authors
Xinyan Dai, Xiao Yan, Kaiwen Zhou, Han Yang, Kelvin KW Ng, James Cheng, Yu Fan
Publication date
2019/11/12
Journal
arXiv preprint arXiv:1911.04655
Description
The high cost of communicating gradients is a major bottleneck for federated learning, as the bandwidth of the participating user devices is limited. Existing gradient compression algorithms are mainly designed for data centers with high-speed networks and achieve $O(\sqrt{d}\log d)$ per-iteration communication cost at best, where $d$ is the size of the model. We propose hyper-sphere quantization (HSQ), a general framework that can be configured to achieve a continuum of trade-offs between communication efficiency and gradient accuracy. In particular, at the high compression ratio end, HSQ provides a low per-iteration communication cost of $O(\log d)$, which is favorable for federated learning. We prove the convergence of HSQ theoretically and show by experiments that HSQ significantly reduces the communication cost of model training without hurting convergence accuracy.
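To make the communication-cost claim concrete, below is a minimal sketch of the vector-quantization idea the abstract describes: a gradient is represented by its scalar norm plus the index of the nearest unit vector in a shared codebook (points on the hyper-sphere), so only the norm and a $\lceil\log_2 K\rceil$-bit index are transmitted. The random codebook construction and all names here are illustrative assumptions, not the authors' exact HSQ algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_codebook(d, K):
    """Shared codebook of K unit vectors in R^d (random here for
    illustration; HSQ's actual codebook construction may differ)."""
    C = rng.standard_normal((K, d))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

def encode(grad, codebook):
    """Compress one gradient to a (norm, codeword-index) message."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return 0.0, 0
    direction = grad / norm
    idx = int(np.argmax(codebook @ direction))  # nearest codeword by cosine
    return norm, idx

def decode(norm, idx, codebook):
    """Reconstruct the quantized gradient from the message."""
    return norm * codebook[idx]

d, K = 1024, 4096          # with K polynomial in d, the index costs O(log d) bits
codebook = make_codebook(d, K)
g = rng.standard_normal(d)
norm, idx = encode(g, codebook)
g_hat = decode(norm, idx, codebook)
cos = float(g @ g_hat) / (np.linalg.norm(g) * np.linalg.norm(g_hat))
print(f"index bits: {int(np.log2(K))}, cosine(g, g_hat): {cos:.3f}")
```

In this reading, the codebook size K is the knob behind the "continuum of trade-offs" the abstract mentions: a larger K gives a more accurate quantized direction but a longer index, while a small K keeps the per-iteration message near the $O(\log d)$ end.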
Total citations
[Citation histogram: yearly counts for 2020–2024]
Scholar articles
X Dai, X Yan, K Zhou, H Yang, KKW Ng, J Cheng… - arXiv preprint arXiv:1911.04655, 2019