ControlVAE: Model-based learning of generative controllers for physics-based characters

H Yao, Z Song, B Chen, L Liu - ACM Transactions on Graphics (TOG), 2022 - dl.acm.org
In this paper, we introduce ControlVAE, a novel model-based framework for learning generative motion control policies based on variational autoencoders (VAE). Our framework can learn a rich and flexible latent representation of skills and a skill-conditioned generative control policy from a diverse set of unorganized motion sequences, which enables the generation of realistic human behaviors by sampling in the latent space and allows high-level control policies to reuse the learned skills to accomplish a variety of downstream tasks. In the training of ControlVAE, we employ a learnable world model to realize direct supervision of the latent space and the control policy. This world model effectively captures the unknown dynamics of the simulation system, enabling efficient model-based learning of high-level downstream tasks. We also learn a state-conditional prior distribution in the VAE-based generative control policy, which generates a skill embedding that outperforms the non-conditional priors in downstream tasks. We demonstrate the effectiveness of ControlVAE using a diverse set of tasks, which allows realistic and interactive control of the simulated characters.
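The abstract names three components, a state-conditional prior, a skill-conditioned control policy, and a learned world model, without giving their architectures. The following PyTorch sketch is only a hypothetical illustration of how such pieces could fit together; all class names, network sizes, and the choice of conditioning inputs are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the components named in the ControlVAE abstract.
# Dimensions, layer widths, and conditioning inputs are assumptions.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ELU(),
        nn.Linear(hidden, hidden), nn.ELU(),
        nn.Linear(hidden, out_dim),
    )


class ControlVAESketch(nn.Module):
    """Stand-in for a state-conditional prior p(z|s), a posterior q(z|s, ref),
    and a skill-conditioned control policy pi(a|s, z)."""

    def __init__(self, state_dim, ref_dim, action_dim, latent_dim=64):
        super().__init__()
        self.prior = mlp(state_dim, 2 * latent_dim)               # p(z | s)
        self.encoder = mlp(state_dim + ref_dim, 2 * latent_dim)   # q(z | s, reference)
        self.policy = mlp(state_dim + latent_dim, action_dim)     # pi(a | s, z)

    @staticmethod
    def _split(params):
        mu, log_std = params.chunk(2, dim=-1)
        return mu, log_std.clamp(-5.0, 2.0)

    def forward(self, state, reference):
        # Posterior over the skill latent, conditioned on the current state and
        # a reference motion frame (an assumed form of conditioning).
        mu_q, log_std_q = self._split(self.encoder(torch.cat([state, reference], -1)))
        z = mu_q + log_std_q.exp() * torch.randn_like(mu_q)  # reparameterization
        # State-conditional prior, used for the KL term and for sampling skills.
        mu_p, log_std_p = self._split(self.prior(state))
        action = self.policy(torch.cat([state, z], -1))
        return action, (mu_q, log_std_q), (mu_p, log_std_p)


class WorldModelSketch(nn.Module):
    """Learned approximation of the simulator dynamics s' = f(s, a),
    here a simple deterministic residual next-state predictor."""

    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.dynamics = mlp(state_dim + action_dim, state_dim)

    def forward(self, state, action):
        return state + self.dynamics(torch.cat([state, action], -1))


# Example rollout through the learned world model (dimensions are arbitrary).
model = ControlVAESketch(state_dim=64, ref_dim=64, action_dim=28)
world = WorldModelSketch(state_dim=64, action_dim=28)
s, ref = torch.randn(1, 64), torch.randn(1, 64)
a, q_params, p_params = model(s, ref)
s_next = world(s, a)  # differentiable, so gradients can flow through predicted dynamics
```

Because the world model is differentiable, losses defined on predicted future states can be backpropagated into the policy and the latent space, which is presumably what the abstract refers to as direct supervision and efficient model-based learning of downstream tasks.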