Human-inspired framework to accelerate reinforcement learning

A Beikmohammadi, S Magnússon - arXiv preprint arXiv:2303.08115, 2023 - arxiv.org
Reinforcement learning (RL) is crucial for data-science decision-making but suffers from sample inefficiency, particularly in real-world scenarios with costly physical interactions. This paper introduces a novel human-inspired framework to enhance the sample efficiency of RL algorithms. It achieves this by initially exposing the learning agent to simpler tasks that progressively increase in complexity, ultimately leading to the main task. This method requires no pre-training and involves learning each simpler task for just one iteration. The resulting knowledge can facilitate various transfer learning approaches, such as value and policy transfer, without increasing computational complexity. It can be applied across different goals, environments, and RL algorithms, including value-based, policy-based, tabular, and deep RL methods. Experimental evaluations confirm the framework's effectiveness in improving sample efficiency, especially on challenging main tasks, using both a simple Random Walk and more complex optimal control problems with constraints.
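The curriculum-plus-value-transfer idea described in the abstract can be illustrated with a minimal sketch: tabular Q-learning on the Random Walk environment mentioned above, where shorter chains serve as the "simpler tasks" (each trained for a single episode) and their Q-table warm-starts the next task. The state-index alignment used for the transfer is a deliberate simplification for illustration; the paper's actual transfer mechanisms, hyperparameters, and task schedule may differ.

```python
import numpy as np

def q_learning(n_states, q_init=None, episodes=1,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D random walk: states 0..n_states-1,
    both ends terminal, reward +1 only at the right end."""
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, 2))          # actions: 0 = left, 1 = right
    if q_init is not None:               # value transfer (illustrative:
        k = min(n_states, q_init.shape[0])   # naive state-index alignment)
        q[:k] = q_init[:k]
    for _ in range(episodes):
        s = n_states // 2                # start in the middle
        while 0 < s < n_states - 1:
            # epsilon-greedy; break exact ties randomly
            if rng.random() < eps or q[s, 0] == q[s, 1]:
                a = int(rng.integers(2))
            else:
                a = int(q[s].argmax())
            s2 = s + (1 if a == 1 else -1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            done = s2 in (0, n_states - 1)
            target = r if done else r + gamma * q[s2].max()
            q[s, a] += alpha * (target - q[s, a])
            s = s2
    return q

# Curriculum: one iteration on each simpler (shorter) walk,
# then train the main task warm-started from the transferred values.
q = None
for n in (5, 9, 13):                     # progressively longer chains
    q = q_learning(n, q_init=q, episodes=1)
q_main = q_learning(17, q_init=q, episodes=200)
```

The same warm-start pattern extends to the other settings the abstract lists: for policy transfer one would initialize the new task's policy (or policy network) from the previous task's, and for deep RL the Q-table copy would become a copy of network weights.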