Bi-KVIL: Keypoints-based visual imitation learning of bimanual manipulation tasks

J Gao, X Jin, F Krebs, N Jaquier… - 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024 - ieeexplore.ieee.org
Visual imitation learning has achieved impressive progress in learning unimanual manipulation tasks from a small set of visual observations, thanks to the latest advances in computer vision. However, learning bimanual coordination strategies and complex object relations from bimanual visual demonstrations, as well as generalizing them to categorical objects in novel cluttered scenes remain unsolved challenges. In this paper, we extend our previous work on keypoints-based visual imitation learning (K-VIL) [1] to bimanual manipulation tasks. The proposed Bi-KVIL jointly extracts so-called Hybrid Master-Slave Relationships (HMSR) among objects and hands, bimanual coordination strategies, and sub-symbolic task representations. Our bimanual task representation is object-centric, embodiment-independent, and viewpoint-invariant, thus generalizing well to categorical objects in novel scenes. We evaluate our approach in various real-world applications, showcasing its ability to learn fine-grained bimanual manipulation tasks from a small number of human demonstration videos. Videos and source code are available at https://sites.google.com/view/bi-kvil.
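
The abstract highlights that the learned task representation is object-centric and viewpoint-invariant. As a rough illustration of that general idea only (not the authors' actual Bi-KVIL implementation, and with all names, structures, and the master pose assumed for the example), the sketch below expresses one object's keypoints in another object's local frame, so the encoding depends only on the relative pose between the two objects and is unchanged under camera motion:

```python
# Hypothetical sketch: re-express a follower object's keypoints in the local
# frame of a leader ("master") object, making the representation
# object-centric and viewpoint-invariant. Illustrative only; not the
# authors' implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class ObjectFrame:
    """Pose of an object, assumed already estimated from its keypoints."""
    rotation: np.ndarray     # (3, 3): object axes expressed in camera frame
    translation: np.ndarray  # (3,): object origin in camera coordinates

def to_master_frame(slave_keypoints: np.ndarray, master: ObjectFrame) -> np.ndarray:
    """Map keypoints (N, 3, camera frame) into the master object's frame.

    Computes R^T (p - t) for each keypoint p. Because the result depends
    only on the relative pose between the two objects, it is invariant to
    changes of camera viewpoint.
    """
    return (slave_keypoints - master.translation) @ master.rotation

# Usage: keypoints of a held object relative to an assumed master pose.
master = ObjectFrame(rotation=np.eye(3), translation=np.array([0.1, 0.0, 0.5]))
slave_kps = np.array([[0.2, 0.1, 0.6], [0.15, -0.05, 0.55]])
print(to_master_frame(slave_kps, master))
```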