MORAL: Aligning AI with human norms through multi-objective reinforced active learning

M Peschl, A Zgonnikov, FA Oliehoek… - arXiv preprint arXiv:2201.00012, 2021 - arxiv.org
Inferring reward functions from demonstrations and pairwise preferences is a promising approach for aligning Reinforcement Learning (RL) agents with human intentions. However, state-of-the-art methods typically focus on learning a single reward model, making it difficult to trade off different reward functions obtained from multiple experts. We propose Multi-Objective Reinforced Active Learning (MORAL), a novel method for combining diverse demonstrations of social norms into a Pareto-optimal policy. By maintaining a distribution over scalarization weights, our approach can interactively tune a deep RL agent towards a variety of preferences, while eliminating the need to compute multiple policies. We empirically demonstrate the effectiveness of MORAL in two scenarios, modeling a delivery task and an emergency task that require an agent to act in the presence of normative conflicts. Overall, we consider our research a step towards multi-objective RL with learned rewards, bridging the gap between the current reward learning and machine ethics literature.
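To make the scalarization idea concrete, the sketch below illustrates one way a distribution over scalarization weights could be maintained and updated from a pairwise trajectory preference. This is not the authors' implementation: it assumes a simple particle approximation over weight vectors and a Bradley-Terry preference likelihood, and names such as `scalarized_return` and `update_weight_posterior` are hypothetical. The paper's full method additionally involves learning the reward models themselves and optimizing a deep RL policy, which are omitted here.

```python
# Minimal sketch (assumed, not from the paper's code): particle posterior over
# scalarization weights for K learned reward functions, updated from one
# pairwise preference between two trajectories via a Bradley-Terry likelihood.
import numpy as np

rng = np.random.default_rng(0)


def scalarized_return(traj_rewards: np.ndarray, w: np.ndarray) -> float:
    # traj_rewards: (T, K) per-step outputs of K learned reward functions.
    # w: (K,) scalarization weight vector on the probability simplex.
    return float((traj_rewards @ w).sum())


def update_weight_posterior(particles, log_weights, traj_a, traj_b, preferred_a, beta=1.0):
    """Bayesian update of a particle posterior over scalarization weights,
    given that the expert preferred trajectory A over B (or vice versa)."""
    for i, w in enumerate(particles):
        ra = scalarized_return(traj_a, w)
        rb = scalarized_return(traj_b, w)
        # Bradley-Terry / logistic likelihood of observing "A preferred".
        p_a = 1.0 / (1.0 + np.exp(-beta * (ra - rb)))
        log_weights[i] += np.log(p_a if preferred_a else 1.0 - p_a)
    # Renormalize the posterior in log space.
    log_weights -= np.logaddexp.reduce(log_weights)
    return particles, log_weights


# --- toy usage -------------------------------------------------------------
K, T, N = 2, 10, 500                      # reward models, trajectory length, particles
particles = rng.dirichlet(np.ones(K), N)  # candidate weight vectors on the simplex
log_weights = np.full(N, -np.log(N))      # uniform prior over particles

traj_a = rng.normal(size=(T, K))          # per-step rewards of two candidate trajectories
traj_b = rng.normal(size=(T, K))
particles, log_weights = update_weight_posterior(
    particles, log_weights, traj_a, traj_b, preferred_a=True
)

# The posterior mean weights could then scalarize rewards for policy optimization.
w_mean = (np.exp(log_weights)[:, None] * particles).sum(axis=0)
print("posterior mean scalarization weights:", w_mean)
```

In this toy setup, each preference query sharpens the weight posterior, so a single policy can be steered towards different trade-offs between the learned reward functions without retraining one policy per objective.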