Communication-efficient distributed deep learning: A comprehensive survey

Z Tang, S Shi, W Wang, B Li, X Chu - arXiv preprint arXiv:2003.06307, 2020 - arxiv.org
Distributed deep learning (DL) has become prevalent in recent years as a way to reduce training time by leveraging multiple computing devices (e.g., GPUs/TPUs), driven by larger models and …

FedBCD: A communication-efficient collaborative learning framework for distributed features

Y Liu, X Zhang, Y Kang, L Li, T Chen… - IEEE Transactions …, 2022 - ieeexplore.ieee.org
We introduce a novel federated learning framework that allows multiple parties, each holding a different set of attributes about the same users, to jointly build models without exposing their raw data …
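The setup this abstract describes is vertical federated learning: parties hold disjoint feature blocks for the same users. Below is a minimal sketch of that setting, assuming a linear logistic model and plain score exchange; the two-party split and all variable names are illustrative, and no privacy protection is included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two parties hold disjoint feature blocks for the SAME 100 users
# (vertical partitioning). Labels live with party A only.
X_a = rng.normal(size=(100, 5))   # party A's attributes
X_b = rng.normal(size=(100, 3))   # party B's attributes
y = rng.integers(0, 2, size=100).astype(float)

w_a = np.zeros(5)                 # each party keeps its own weights locally
w_b = np.zeros(3)
lr = 0.1

for _ in range(200):
    # Each party computes a partial score on its own features; only these
    # low-dimensional scores are exchanged, never the raw attributes.
    logits = X_a @ w_a + X_b @ w_b
    residual = 1.0 / (1.0 + np.exp(-logits)) - y  # logistic-loss gradient term
    # The shared residual lets each party update its own block independently.
    w_a -= lr * X_a.T @ residual / len(y)
    w_b -= lr * X_b.T @ residual / len(y)
```

FedBCD's own contribution is letting each party run multiple local updates between exchanges to cut communication rounds; the sketch above exchanges scores at every step.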

CoCoA: A general framework for communication-efficient distributed optimization

V Smith, S Forte, C Ma, M Takáč, MI Jordan… - Journal of Machine …, 2018 - jmlr.org
The scale of modern datasets necessitates the development of efficient distributed
optimization methods for machine learning. We present a general-purpose framework for …
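CoCoA's communication pattern, roughly: each machine approximately solves a subproblem over its own data partition using any local solver, and only aggregated model updates cross the network. Below is a primal, simplified sketch of that pattern on least squares (CoCoA itself is formulated via local dual subproblems); the partitioning and the inner gradient-step solver are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Least-squares data split by rows across K "machines".
K, n, d = 4, 200, 10
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
parts = np.array_split(np.arange(n), K)

w = np.zeros(d)
for _ in range(50):
    deltas = []
    for idx in parts:
        Xi, yi = X[idx], y[idx]
        # Local solver: a few gradient steps on this partition's subproblem.
        local = w.copy()
        for _ in range(5):
            local -= 0.01 * Xi.T @ (Xi @ local - yi) / len(idx)
        deltas.append(local - w)
    # One communication per round: average the local model updates.
    w += sum(deltas) / K
```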

A communication-efficient collaborative learning framework for distributed features

Y Liu, Y Kang, X Zhang, L Li, Y Cheng, T Chen… - arXiv preprint arXiv …, 2019 - arxiv.org
We introduce a collaborative learning framework that allows multiple parties, each holding a different set of attributes about the same users, to jointly build models without exposing their raw data …

Adaptive vertical federated learning on unbalanced features

J Zhang, S Guo, Z Qu, D Zeng, H Wang… - … on Parallel and …, 2022 - ieeexplore.ieee.org
Most existing FL systems focus on a data-parallel architecture in which training data are partitioned by samples among several parties. In some real-life applications, however …
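The contrast drawn here, between sample-partitioned (horizontal/data-parallel) and feature-partitioned (vertical) training data, is easy to see on a toy matrix; the shapes below are arbitrary.

```python
import numpy as np

X = np.arange(24, dtype=float).reshape(6, 4)  # 6 samples, 4 features

# Horizontal / data-parallel FL: parties hold different SAMPLES,
# each with the full feature set.
horizontal = np.array_split(X, 3, axis=0)     # 3 parties, each (2, 4)

# Vertical FL: parties hold different FEATURES for the same samples;
# feature blocks may be unbalanced, e.g. 3 features vs. 1.
vertical = np.split(X, [3], axis=1)           # shapes (6, 3) and (6, 1)
```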

Global convergence of block coordinate descent in deep learning

J Zeng, TTK Lau, S Lin, Y Yao - International conference on …, 2019 - proceedings.mlr.press
Deep learning has attracted extensive attention due to its great empirical success. The efficiency of block coordinate descent (BCD) methods has recently been demonstrated …
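For concreteness, block coordinate descent optimizes one block of variables at a time while the others are held fixed. Below is a generic BCD loop on a two-layer ReLU least-squares objective, treating each weight matrix as a block; this illustrates the method class, not the specific splitting or updates analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))
Y = rng.normal(size=(100, 1))
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
lr = 0.01

def grads(W1, W2):
    H = np.maximum(X @ W1, 0.0)                  # ReLU hidden layer
    R = H @ W2 - Y                               # residual
    g2 = H.T @ R / len(X)                        # gradient w.r.t. block W2
    g1 = X.T @ ((R @ W2.T) * (H > 0)) / len(X)   # gradient w.r.t. block W1
    return g1, g2

for _ in range(300):
    # Block 1: update W1 with W2 frozen.
    g1, _ = grads(W1, W2)
    W1 -= lr * g1
    # Block 2: update W2 with the new W1 frozen.
    _, g2 = grads(W1, W2)
    W2 -= lr * g2
```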

Doubly optimal no-regret online learning in strongly monotone games with bandit feedback

W Ba, T Lin, J Zhang, Z Zhou - Operations Research, 2025 - pubsonline.informs.org
We consider online no-regret learning in unknown games with bandit feedback, where each player observes only its own reward at each time step, determined by all players' current joint …

Stochastic dual coordinate ascent with adaptive probabilities

D Csiba, Z Qu, P Richtárik - International Conference on …, 2015 - proceedings.mlr.press
This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving regularized empirical risk minimization problems. Our modification …
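A sketch of the idea named here: SDCA with non-uniform coordinate sampling. For ridge regression the coordinate step has a closed form, and below coordinates are drawn with probability proportional to the magnitude of their dual residual; this sampling rule is a simplification standing in for the paper's exact adaptive scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, lam = 200, 20, 0.1
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
sq_norms = (X ** 2).sum(axis=1)

alpha = np.zeros(n)            # dual variables, one per example
w = X.T @ alpha / (lam * n)    # primal iterate maintained from the dual

for _ in range(2000):
    # Adaptive probabilities: favor coordinates with a large dual residual
    # |y_i - x_i.w - alpha_i| (simplified from the paper's rule).
    resid = np.abs(y - X @ w - alpha)
    p = resid / resid.sum() if resid.sum() > 0 else np.full(n, 1.0 / n)
    i = rng.choice(n, p=p)
    # Closed-form SDCA step for the squared loss on coordinate i.
    delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
    alpha[i] += delta
    w += delta * X[i] / (lam * n)
```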

FedBCGD: Communication-efficient accelerated block coordinate gradient descent for federated learning

J Liu, F Shang, Y Liu, H Liu, Y Li, YX Gong - Proceedings of the 32nd …, 2024 - dl.acm.org
Although Federated Learning has been widely studied in recent years, each communication round still incurs high overhead for large-scale models such as Vision …
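The communication saving this title points at comes from exchanging one parameter block per round instead of the whole model. Below is a bare-bones block-wise federated averaging sketch under that assumption; the block schedule, the stand-in local solver, and the synthetic client optima are placeholders, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_clients, n_blocks = 12, 5, 3
blocks = np.array_split(np.arange(d), n_blocks)   # server-side block partition
client_opts = rng.normal(size=(n_clients, d))     # synthetic per-client optima
global_w = np.zeros(d)

def local_block_update(w, block, c):
    # Stand-in for local training: move only the chosen block toward this
    # client's local optimum; just these len(block) values are uploaded.
    return w[block] + 0.5 * (client_opts[c, block] - w[block])

for rnd in range(9):
    block = blocks[rnd % n_blocks]                # cycle through the blocks
    uploads = [local_block_update(global_w, block, c) for c in range(n_clients)]
    # Per-round traffic is O(len(block)) per client instead of O(d).
    global_w[block] = np.mean(uploads, axis=0)    # average just this block
```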

A distributed second-order algorithm you can trust

C Dünner, A Lucchi, M Gargiani… - International …, 2018 - proceedings.mlr.press
Due to the rapid growth of data and computational resources, distributed optimization has
become an active research area in recent years. While first-order methods seem to dominate …