Q Ma, Y Xu, H Xu, Z Jiang, L Huang… - IEEE Journal on …, 2021 - ieeexplore.ieee.org
Federated learning (FL) involves training machine learning models over distributed edge nodes (i.e., workers) while facing three critical challenges: edge heterogeneity, Non-IID data …
HT Wai, Z Yang, Z Wang… - Advances in Neural …, 2018 - proceedings.neurips.cc
Despite the success of single-agent reinforcement learning, multi-agent reinforcement learning (MARL) remains challenging due to complex interactions between agents …
Motivated by applications to distributed optimization over networks and large-scale data processing in machine learning, we analyze the deterministic incremental aggregated …
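The truncated phrase appears to refer to the incremental aggregated gradient (IAG) scheme: keep the most recently computed gradient of every component function, refresh one component per iteration, and step with the sum of the stored (possibly stale) gradients. Below is a minimal sketch of that idea, not the paper's exact algorithm; the function names, the cyclic refresh order, and the toy problem are illustrative assumptions.

import numpy as np

def iag(x0, grad_fns, step_size=0.01, num_iters=1000):
    """Sketch of the (deterministic) incremental aggregated gradient idea:
    maintain a table of the last gradient computed for each component f_i,
    refresh one entry per iteration, and step with the aggregated sum."""
    x = np.array(x0, dtype=float)
    n = len(grad_fns)
    stored = [g(x) for g in grad_fns]        # gradient table, initialized at x0
    aggregate = np.sum(stored, axis=0)
    for k in range(num_iters):
        i = k % n                            # cyclic choice of component to refresh
        new_grad = grad_fns[i](x)            # gradient of f_i at the current iterate
        aggregate += new_grad - stored[i]    # update the running aggregate in O(d)
        stored[i] = new_grad
        x -= step_size * aggregate           # step with the (partly stale) aggregate
    return x

# Toy example: minimize sum_i 0.5*(x - a_i)^2, whose minimizer is the mean of the a_i.
anchors = [1.0, 2.0, 6.0]
grads = [lambda x, a=a: x - a for a in anchors]
print(iag(np.array([0.0]), grads, step_size=0.05, num_iters=2000))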
We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems …
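D-BFGS is described as a variation of the classical BFGS quasi-Newton method; for reference, here is a sketch of one centralized BFGS step in inverse-Hessian form, which is the update the decentralized method varies. This is not the decentralized algorithm itself, and the skip rule and fixed step size are illustrative simplifications.

import numpy as np

def bfgs_step(x, H, grad_fn, step_size=1.0):
    """One centralized BFGS quasi-Newton step (sketch); H approximates the inverse Hessian."""
    g = grad_fn(x)
    x_new = x - step_size * H @ g            # quasi-Newton step (no line search here)
    s = x_new - x                            # iterate difference
    y = grad_fn(x_new) - g                   # gradient difference
    sy = float(s @ y)
    if sy > 1e-12:                           # curvature condition; skip update otherwise
        rho = 1.0 / sy
        I = np.eye(len(x))
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
    return x_new, H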
Mini-batch optimization has proven to be a powerful paradigm for large-scale learning. However, the state-of-the-art parallel mini-batch algorithms assume synchronous operation …
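To contrast with the synchronous assumption mentioned in this snippet, the following toy simulation of asynchronous mini-batch SGD lets the server apply gradients computed at stale parameters. The delay model, oracle interface, and constants are illustrative assumptions, not the paper's setup.

import random
import numpy as np

def async_minibatch_sgd(grad_fn, x0, data, batch_size=8, max_delay=3,
                        step_size=0.05, num_updates=500, seed=0):
    """Toy asynchronous mini-batch SGD (sketch): each applied gradient was computed
    at parameters that may be up to `max_delay` updates old."""
    rng = random.Random(seed)
    x = np.array(x0, dtype=float)
    history = [x.copy()]                     # past iterates, used to model staleness
    for _ in range(num_updates):
        delay = rng.randint(0, min(max_delay, len(history) - 1))
        stale_x = history[-1 - delay]        # parameters the worker actually read
        batch = rng.sample(data, batch_size)
        g = grad_fn(stale_x, batch)          # mini-batch gradient at stale parameters
        x = x - step_size * g                # server applies the (possibly stale) update
        history.append(x.copy())
    return x

As a rule of thumb, the admissible step size has to shrink as the maximum staleness grows, which is the kind of trade-off asynchronous analyses quantify.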
A Adibi, N Dal Fabbro, L Schenato… - International …, 2024 - proceedings.mlr.press
Motivated by applications in large-scale and multi-agent reinforcement learning, we study the non-asymptotic performance of stochastic approximation (SA) schemes with delayed …
Y Arjevani, O Shamir, N Srebro - Algorithmic Learning …, 2020 - proceedings.mlr.press
We establish matching upper and lower complexity bounds for gradient descent and stochastic gradient descent on quadratic functions, when the gradients are delayed and …
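In the quadratic setting this entry describes, delayed gradient descent reduces to a linear recurrence; a sketch of the recursion, assuming f(x) = \tfrac{1}{2} x^\top A x and a fixed delay \tau (notation assumed, not the paper's):

x_{k+1} = x_k - \eta \,\nabla f(x_{k-\tau}) = x_k - \eta A\, x_{k-\tau},

a recurrence of order \tau + 1 whose stable range of step sizes \eta, and hence the attainable rate, shrinks as the delay grows.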
C Xu, Q Liu, T Huang - Information Sciences, 2022 - Elsevier
Distributed optimization algorithms have the advantages of privacy protection and parallel computing. However, the distributed nature of these algorithms makes the system vulnerable …
We present a convergence rate analysis for biased stochastic gradient descent (SGD), where individual gradient updates are corrupted by computation errors. We develop …
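A hedged sketch of the setting described here: SGD whose individual updates are corrupted by a bounded, not necessarily zero-mean, computation error, so the iterates settle into a neighborhood of the optimum rather than converging exactly. The oracle interface and error model below are illustrative assumptions.

import numpy as np

def biased_sgd(grad_fn, x0, step_size=0.01, num_iters=2000, bias_bound=0.05, seed=0):
    """SGD with corrupted gradient updates (sketch): each stochastic gradient is
    perturbed by a bounded systematic error, so convergence is only to a
    neighborhood whose radius scales with the bias."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        g = grad_fn(x, rng)                      # stochastic gradient oracle (assumed interface)
        error = bias_bound * np.ones_like(g)     # systematic, non-zero-mean corruption
        x = x - step_size * (g + error)          # biased update
    return x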