An asynchronous multi-agent actor-critic algorithm for distributed reinforcement learning

Y Lin, Y Luo, K Zhang, Z Yang, Z Wang, T Basar, R Sandhu, J Liu
NeurIPS Optimization Foundations for Reinforcement Learning Workshop, 2019 - par.nsf.gov
This paper studies a distributed reinforcement learning problem in which a network of multiple agents aims to cooperatively maximize the globally averaged return through communication with only local neighbors. An asynchronous multi-agent actor-critic algorithm is proposed for possibly unidirectional communication relationships depicted by a directed graph. Each agent independently updates its variables at “event times” determined by its own clock. It is not assumed that the agents’ clocks are synchronized or that the event times are evenly spaced. It is shown that the algorithm can solve the problem for any strongly connected graph in the presence of communication and computation delays.
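The communication pattern the abstract describes can be illustrated with a minimal sketch: agents on a directed, strongly connected graph, each waking at its own random “event times” and mixing its local estimate with values received from in-neighbors. The graph, mixing weights, and update rule below are illustrative assumptions for an asynchronous consensus step, not the paper's actual actor-critic recursion.

```python
import random

# Hypothetical setup: three agents on a strongly connected directed graph
# (0 -> 1 -> 2 -> 0, plus an extra edge 0 -> 2). in_neighbors[i] lists the
# agents whose values agent i can receive.
in_neighbors = {0: [2], 1: [0], 2: [1, 0]}
values = {0: 0.0, 1: 5.0, 2: 10.0}  # each agent's local estimate

random.seed(0)
for _ in range(500):
    i = random.choice([0, 1, 2])  # agent i's own clock fires (an "event time")
    nbrs = in_neighbors[i]
    # Mix own value with the latest values from in-neighbors; no global
    # synchronization is assumed, only strong connectivity of the graph.
    values[i] = 0.5 * values[i] + 0.5 * sum(values[j] for j in nbrs) / len(nbrs)

spread = max(values.values()) - min(values.values())
print(spread)  # agents asymptotically agree despite asynchronous updates
```

Even with unevenly spaced, uncoordinated updates, strong connectivity is enough for the local estimates to reach agreement; the paper's algorithm additionally interleaves such communication with local actor-critic updates and tolerates delays.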