What about inputting policy in value function: Policy representation and policy-extended value function approximator

H. Tang, Z. Meng, J. Hao, C. Chen, D. Graves, D. Li, C. Yu, H. Mao, W. Liu, Y. Yang, W. Tao, …
Proceedings of the AAAI Conference on Artificial Intelligence, 2022. ojs.aaai.org
Abstract
We study the Policy-extended Value Function Approximator (PeVFA) in Reinforcement Learning (RL), which extends the conventional value function approximator (VFA) to take as input not only the state (and action) but also an explicit policy representation. Such an extension enables a PeVFA to preserve the values of multiple policies at the same time and brings an appealing characteristic, i.e., value generalization among policies. We formally analyze value generalization under Generalized Policy Iteration (GPI). Through both theoretical and empirical lenses, we show that the generalized value estimates offered by a PeVFA may have lower initial approximation error with respect to the true values of successive policies, which is expected to improve consecutive value approximation during GPI. Based on these observations, we introduce a new form of GPI with PeVFA that leverages value generalization along the policy improvement path. Moreover, we propose a representation learning framework for RL policies, providing several approaches to learn effective policy embeddings from policy network parameters or from state-action pairs. In our experiments, we evaluate the efficacy of the value generalization offered by PeVFA and of policy representation learning in several OpenAI Gym continuous control tasks. As a representative instance, Proximal Policy Optimization (PPO) re-implemented under the paradigm of GPI with PeVFA achieves about 40% performance improvement over its vanilla counterpart in most environments.
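The core idea above — a value function that conditions on an explicit policy representation — can be illustrated with a minimal NumPy sketch. This is an assumed toy construction, not the paper's implementation: the policy is embedded by encoding sampled state-action pairs and mean-pooling them (a permutation-invariant summary, one of the representation routes the abstract mentions), and the embedding is concatenated with the state before the value head. All function names and dimensions are illustrative.

```python
import numpy as np

def init_mlp(sizes, rng):
    """Small random MLP: list of (W, b) layers."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, layers):
    """Forward pass with tanh hidden activations and a linear output."""
    for W, b in layers[:-1]:
        x = np.tanh(x @ W + b)
    W, b = layers[-1]
    return x @ W + b

def encode_policy(state_action_pairs, encoder):
    """Embed a policy from sampled (s, a) pairs: encode each pair,
    then mean-pool so the summary is order-invariant."""
    return mlp_forward(state_action_pairs, encoder).mean(axis=0)

def pevfa_value(state, policy_embedding, value_net):
    """PeVFA head: value of `state` under the policy the embedding describes.
    A conventional VFA would see only `state`; here the policy is an input too."""
    x = np.concatenate([state, policy_embedding])
    return mlp_forward(x, value_net)[0]

rng = np.random.default_rng(0)
state_dim, action_dim, embed_dim = 4, 2, 8
encoder = init_mlp([state_dim + action_dim, 16, embed_dim], rng)
value_net = init_mlp([state_dim + embed_dim, 32, 1], rng)

# Pretend these (s, a) pairs were sampled from the current policy.
pairs = rng.standard_normal((32, state_dim + action_dim))
z = encode_policy(pairs, encoder)
v = pevfa_value(rng.standard_normal(state_dim), z, value_net)
print(z.shape, v)
```

Because `z` changes as the policy changes, values estimated for earlier policies along the improvement path can generalize to the next policy's embedding — the effect the paper exploits in its GPI-with-PeVFA scheme.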