Advances in asynchronous parallel and distributed optimization

M Assran, A Aytekin, HR Feyzmahdavian… - Proceedings of the …, 2020 - ieeexplore.ieee.org
Motivated by large-scale optimization problems arising in the context of machine learning,
there have been several advances in the study of asynchronous parallel and distributed …
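
A survey entry, but the common object it covers is easy to state. As a generic sketch (not the survey's own notation), an asynchronous gradient step with staleness \tau_k reads

    x_{k+1} = x_k - \gamma \, g(x_{k-\tau_k}),

where g is a (stochastic) gradient oracle and the delay \tau_k measures how outdated the iterate used by the worker was when the update is applied.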

FedSA: A semi-asynchronous federated learning mechanism in heterogeneous edge computing

Q Ma, Y Xu, H Xu, Z Jiang, L Huang… - IEEE Journal on …, 2021 - ieeexplore.ieee.org
Federated learning (FL) involves training machine learning models over distributed edge
nodes (i.e., workers) while facing three critical challenges: edge heterogeneity, non-IID data …
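
A hedged sketch of the generic semi-asynchronous pattern (aggregate once the fastest m of n clients report, rather than waiting for all of them or reacting to each one); FedSA's actual ordering and staleness weighting are in the paper, and all names below are hypothetical:

    import numpy as np

    def semi_async_round(global_model, arrived_deltas, m):
        # Aggregate the first m client deltas to arrive: a middle ground
        # between fully synchronous (wait for all n clients) and fully
        # asynchronous (update on every single arrival).
        return global_model + np.mean(arrived_deltas[:m], axis=0)

    # toy usage: 5 clients, aggregate after the fastest 3 report
    model = np.zeros(4)
    deltas = [0.01 * np.random.randn(4) for _ in range(5)]
    model = semi_async_round(model, deltas, m=3)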

Multi-agent reinforcement learning via double averaging primal-dual optimization

HT Wai, Z Yang, Z Wang… - Advances in Neural …, 2018 - proceedings.neurips.cc
Despite the success of single-agent reinforcement learning, multi-agent reinforcement
learning (MARL) remains challenging due to complex interactions between agents …

On the convergence rate of incremental aggregated gradient algorithms

M Gurbuzbalaban, A Ozdaglar, PA Parrilo - SIAM Journal on Optimization, 2017 - SIAM
Motivated by applications to distributed optimization over networks and large-scale data
processing in machine learning, we analyze the deterministic incremental aggregated …
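
The IAG update itself is simple to sketch: keep the most recently computed gradient of every component and step with their sum, refreshing one table entry per iteration. A minimal sketch (hypothetical names, not the authors' code):

    import numpy as np

    def iag(grads, x0, lr, n_iters):
        # grads: list of per-component gradient functions grad_i(x)
        n, x = len(grads), x0.astype(float)
        table = [g(x) for g in grads]   # stored, possibly stale, gradients
        agg = np.sum(table, axis=0)     # running sum of the table
        for k in range(n_iters):
            i = k % n                   # deterministic cyclic order
            g_new = grads[i](x)
            agg += g_new - table[i]     # refresh exactly one table entry
            table[i] = g_new
            x = x - lr * agg            # step with the aggregated gradient
        return x

    # toy usage: minimize sum_i 0.5*(x - i)^2; the minimizer is the mean 1.5
    grads = [lambda x, i=i: x - i for i in range(4)]
    x_star = iag(grads, np.array([0.0]), lr=0.1, n_iters=200)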

Decentralized quasi-Newton methods

M Eisen, A Mokhtari, A Ribeiro - IEEE Transactions on Signal …, 2017 - ieeexplore.ieee.org
We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a
variation of the BFGS quasi-Newton method for solving decentralized optimization problems …
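
For context, the centralized BFGS update that D-BFGS adapts maintains an inverse-Hessian approximation H_k via

    H_{k+1} = (I - \rho_k s_k y_k^\top) H_k (I - \rho_k y_k s_k^\top) + \rho_k s_k s_k^\top,
    \qquad \rho_k = 1 / (y_k^\top s_k),

with s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k); the decentralized variant replaces these global quantities with ones each node can compute from neighbor information (see the paper for the exact construction).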

An asynchronous mini-batch algorithm for regularized stochastic optimization

HR Feyzmahdavian, A Aytekin… - IEEE Transactions on …, 2016 - ieeexplore.ieee.org
Mini-batch optimization has proven to be a powerful paradigm for large-scale learning.
However, the state-of-the-art parallel mini-batch algorithms assume synchronous operation …
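
For the regularized problem \min_x \mathbb{E}[f(x;\xi)] + R(x), an asynchronous mini-batch step with staleness \tau_k has, schematically, the proximal form

    x_{k+1} = \mathrm{prox}_{\gamma R}\Big( x_k - \frac{\gamma}{b} \sum_{j=1}^{b} \nabla f(x_{k-\tau_k}; \xi_j) \Big),

where b is the mini-batch size; the paper's actual algorithm and step-size conditions differ in detail, so this only indicates the shape of the update.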

Stochastic Approximation with Delayed Updates: Finite-Time Rates under Markovian Sampling

A Adibi, N Dal Fabbro, L Schenato… - International …, 2024 - proceedings.mlr.press
Motivated by applications in large-scale and multi-agent reinforcement learning, we study
the non-asymptotic performance of stochastic approximation (SA) schemes with delayed …
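
The scheme studied is, schematically, a stochastic approximation iteration driven by a delayed iterate and correlated noise,

    x_{k+1} = x_k + \alpha \, g(x_{k-\tau_k}, o_k),

where o_k is drawn from a Markov chain rather than an i.i.d. sequence; TD learning with delayed updates is the canonical instance. The precise delay and mixing assumptions under which finite-time rates hold are the paper's contribution.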

A tight convergence analysis for stochastic gradient descent with delayed updates

Y Arjevani, O Shamir, N Srebro - Algorithmic Learning …, 2020 - proceedings.mlr.press
We establish matching upper and lower complexity bounds for gradient descent and
stochastic gradient descent on quadratic functions, when the gradients are delayed and …
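
On a quadratic f(w) = \tfrac{1}{2} w^\top A w, gradient descent with a fixed delay \tau,

    w_{t+1} = w_t - \eta A w_{t-\tau},

is a linear recurrence, so its convergence rate is governed by the spectral radius of the associated companion matrix; this is the standard lens through which tight rates for delayed updates on quadratics are obtained, though the paper's exact argument may differ.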

Resilient penalty function method for distributed constrained optimization under byzantine attack

C Xu, Q Liu, T Huang - Information Sciences, 2022 - Elsevier
Distributed optimization algorithms have the advantages of privacy protection and parallel
computing. However, the distributed nature of these algorithms makes the system vulnerable …
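
As background for the title, a generic penalty function method trades the constrained problem \min_x f(x) subject to g_i(x) \le 0 for the unconstrained

    \min_x \; f(x) + c \sum_i \max\{0, g_i(x)\},

with penalty parameter c > 0; the paper's resilient variant additionally has to keep such a scheme stable when some agents send adversarial (Byzantine) messages.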

Analysis of biased stochastic gradient descent using sequential semidefinite programs

B Hu, P Seiler, L Lessard - Mathematical Programming, 2021 - Springer
We present a convergence rate analysis for biased stochastic gradient descent (SGD),
where individual gradient updates are corrupted by computation errors. We develop …
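
Schematically, the corrupted update analyzed is of the form

    x_{k+1} = x_k - \alpha \big( \nabla f(x_k) + e_k \big),

where e_k is a computation error, bounded for instance relatively as \|e_k\| \le \delta \|\nabla f(x_k)\|; the sequential semidefinite programs then certify how fast such a perturbed iteration still contracts. The exact error model and SDP construction are in the paper.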