Auto-aligning multiagent incentives with global objectives

M Kwon, JP Agapiou, EA Duéñez-Guzmán… - ICML Workshop on Localized Learning (LLW), 2023 - openreview.net
The general ability to achieve a single task with a set of decentralized, intelligent agents is an important goal in multiagent research. The complex interaction between individual agents' incentives makes it particularly challenging to design their objectives so that the resulting multiagent system aligns with a desired global goal. In this work, instead of considering the problem of designing suitable incentives from scratch, we assume a multiagent system with given preset incentives and consider $\textit{automatically modifying}$ these incentives online to achieve a new goal. This reduces the search space over possible individual incentives and takes advantage of the effort instilled by the previous system designer. We demonstrate the promise as well as the limitations of re-purposing multiagent systems in this way, both theoretically and empirically, on a variety of domains. Surprisingly, we show that training a diverse multiagent system to align with a modified global objective ($g \rightarrow g'$) can, in at least one case, lead to better generalization performance in unseen test scenarios when evaluated on the original objective ($g$).
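The abstract does not specify the paper's algorithm, but the core idea of re-purposing preset incentives can be sketched as follows. In this hypothetical toy example (all names and the mixing scheme are illustrative assumptions, not the authors' method), each agent's modified reward mixes its preset incentive with a new global objective, and a one-dimensional online search over the mixing weight picks the modification that best aligns the system's equilibrium behavior with the global goal:

```python
# Hypothetical sketch: re-purposing preset individual incentives toward a
# new global objective g' by mixing, r_i' = (1 - alpha) * r_i + alpha * g'.
# Not the paper's algorithm; a minimal illustration of the general idea.

# Toy 2-agent, 2-action matrix game with preset (conflicting) incentives.
R1 = [[3, 0], [2, 1]]  # agent 1's preset incentive
R2 = [[1, 2], [0, 3]]  # agent 2's preset incentive
G  = [[0, 4], [4, 0]]  # new global objective g' (rewards anti-coordination)

def modified_reward(R, a1, a2, alpha):
    """Mix an agent's preset incentive with the global objective."""
    return (1 - alpha) * R[a1][a2] + alpha * G[a1][a2]

def equilibrium(alpha, steps=50):
    """Iterated best response under the modified incentives."""
    a1, a2 = 0, 0
    for _ in range(steps):
        a1 = max((0, 1), key=lambda a: modified_reward(R1, a, a2, alpha))
        a2 = max((0, 1), key=lambda a: modified_reward(R2, a1, a, alpha))
    return a1, a2

def global_value(alpha):
    """Global objective achieved at the resulting joint behavior."""
    a1, a2 = equilibrium(alpha)
    return G[a1][a2]

# Online search over the (here one-dimensional) modification space:
# choose the mixing weight that maximizes the global objective.
best_alpha = max([0.0, 0.25, 0.5, 0.75, 1.0], key=global_value)
```

With the unmodified incentives (alpha = 0) the agents settle on a jointly preferred but globally worthless outcome, while a nonzero mixing weight steers them to the global optimum; the point of restricting the search to modifications of the preset incentives, rather than designing rewards from scratch, is exactly this reduction of the search space to a few parameters.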