Multi-agent reinforcement learning: A selective overview of theories and algorithms

K Zhang, Z Yang, T Başar - Handbook of reinforcement learning and …, 2021 - Springer
Recent years have witnessed significant advances in reinforcement learning (RL), which
has registered tremendous success in solving various sequential decision-making problems …

A survey of distributed optimization

T Yang, X Yi, J Wu, Y Yuan, D Wu, Z Meng… - Annual Reviews in …, 2019 - Elsevier
In distributed optimization of multi-agent systems, agents cooperate to minimize a global
function which is a sum of local objective functions. Motivated by applications including …
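
For concreteness, the problem class surveyed here is usually written as $\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$, where each of the $n$ agents has access only to its own local objective $f_i$ and must agree on a common minimizer through local computation and communication with neighbors (the notation is generic, not tied to this particular survey).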

A review of cooperative multi-agent deep reinforcement learning

A Oroojlooy, D Hajinezhad - Applied Intelligence, 2023 - Springer
Deep Reinforcement Learning has made significant progress in multi-agent
systems in recent years. The aim of this review article is to provide an overview of recent …

Linear convergence in federated learning: Tackling client heterogeneity and sparse gradients

A Mitra, R Jaafar, GJ Pappas… - Advances in Neural …, 2021 - proceedings.neurips.cc
We consider a standard federated learning (FL) setup where a group of clients periodically
coordinate with a central server to train a statistical model. We develop a general algorithmic …
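
The paper builds on the standard FL template of periodic local training plus server aggregation. The snippet below is a minimal sketch of that generic template only (FedAvg-style unweighted averaging on toy quadratics); it is not the variance-corrected algorithm developed in this paper, and all names and parameters are illustrative.

```python
import numpy as np

def local_sgd(w, grad_fn, data, lr=0.1, steps=5):
    """A few local gradient steps on one client's data."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w, data)
    return w

def federated_round(w_server, clients, grad_fn):
    """One round: broadcast the model, train locally, average the results."""
    local_models = [local_sgd(w_server, grad_fn, data) for data in clients]
    return np.mean(local_models, axis=0)  # unweighted server averaging

# Toy heterogeneous clients: f_i(w) = 0.5 * ||w - b_i||^2 with different b_i.
grad_fn = lambda w, b: w - b
clients = [np.array([1.0, 0.0]), np.array([0.0, 3.0])]
w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients, grad_fn)
print(w)  # approaches the average of the client optima, roughly [0.5, 1.5]
```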

Network topology and communication-computation tradeoffs in decentralized optimization

A Nedić, A Olshevsky, MG Rabbat - Proceedings of the IEEE, 2018 - ieeexplore.ieee.org
In decentralized optimization, nodes cooperate to minimize an overall objective function that
is the sum (or average) of per-node private objective functions. Algorithms interleave local …
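
A minimal sketch of the interleaving described here, using the simplest decentralized gradient method: each node averages its iterate with its neighbors through a doubly stochastic mixing matrix and then takes a local gradient step. The network, step size, and toy objectives are illustrative assumptions.

```python
import numpy as np

# Doubly stochastic mixing matrix for a 3-node network.
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

# Local objectives f_i(x) = 0.5 * (x - b_i)^2; the global optimum is mean(b).
b = np.array([1.0, 2.0, 6.0])
grad = lambda x: x - b            # elementwise per-node gradients

x = np.zeros(3)                   # one scalar iterate per node
alpha = 0.05
for _ in range(500):
    x = W @ x - alpha * grad(x)   # communication (mixing) + local computation
print(x)                          # all entries near mean(b) = 3, up to an O(alpha) bias
```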

An improved analysis of gradient tracking for decentralized machine learning

A Koloskova, T Lin, SU Stich - Advances in Neural …, 2021 - proceedings.neurips.cc
We consider decentralized machine learning over a network where the training data is
distributed across $n$ agents, each of which can compute stochastic model updates on …

Achieving geometric convergence for distributed optimization over time-varying graphs

A Nedic, A Olshevsky, W Shi - SIAM Journal on Optimization, 2017 - SIAM
This paper considers the problem of distributed optimization over time-varying graphs. For
the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing …
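
As I recall it, the DIGing recursion augments consensus with a gradient-tracking variable: with a time-varying doubly stochastic matrix $W(k)$, the agents' stacked iterates $x^k$ and trackers $y^k$ (initialized as $y^0 = \nabla f(x^0)$) evolve roughly as $x^{k+1} = W(k)\,x^k - \alpha\, y^k$ and $y^{k+1} = W(k)\,y^k + \nabla f(x^{k+1}) - \nabla f(x^k)$; the paper should be consulted for the exact statement and the conditions under which the geometric rate holds.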

Push–pull gradient methods for distributed optimization in networks

S Pu, W Shi, J Xu, A Nedić - IEEE Transactions on Automatic …, 2020 - ieeexplore.ieee.org
In this article, we focus on solving a distributed convex optimization problem in a network,
where each agent has its own convex cost function and the goal is to minimize the sum of …
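
To my understanding, the push–pull structure uses two different weight matrices: a row-stochastic matrix $R$ to "pull" decision variables and a column-stochastic matrix $C$ to "push" gradient-tracking information, with updates roughly of the form $x^{k+1} = R\,(x^k - \alpha\, y^k)$ and $y^{k+1} = C\, y^k + \nabla F(x^{k+1}) - \nabla F(x^k)$. This avoids the need for doubly stochastic weights and so accommodates directed networks; see the article for the precise form and assumptions.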

Distributed stochastic gradient tracking methods

S Pu, A Nedić - Mathematical Programming, 2021 - Springer
In this paper, we study the problem of distributed multi-agent optimization over a network,
where each agent possesses a local cost function that is smooth and strongly convex. The …
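
A minimal simulation sketch in the spirit of this setting: gradient tracking driven by noisy local gradients over a fixed mixing matrix. The noise model, step size, and names are illustrative assumptions, not the authors' exact method statement.

```python
import numpy as np

rng = np.random.default_rng(0)

W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])   # doubly stochastic mixing matrix

b = np.array([1.0, 2.0, 6.0])       # minimizers of the local f_i(x) = 0.5*(x - b_i)^2
sgrad = lambda x: x - b + 0.1 * rng.standard_normal(3)   # noisy local gradients

alpha = 0.02
x = np.zeros(3)
y = sgrad(x)                         # tracker initialized at the local gradients
g_old = y.copy()
for _ in range(2000):
    x = W @ (x - alpha * y)          # mix, then step along the tracked direction
    g_new = sgrad(x)
    y = W @ y + g_new - g_old        # track the network-average stochastic gradient
    g_old = g_new
print(x)                             # fluctuates around the global optimum mean(b) = 3
```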

Distributed gradient methods for convex machine learning problems in networks: Distributed optimization

A Nedic - IEEE Signal Processing Magazine, 2020 - ieeexplore.ieee.org
This article provides an overview of distributed gradient methods for solving convex machine
learning problems of the form $\min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^{m} f_i(x)$ in a system consisting of $m$ …