GanHand: Predicting Human Grasp Affordances in Multi-Object Scenes

E Corona, A Pumarola, G Alenyà… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 - openaccess.thecvf.com
Abstract
The rise of deep learning has brought remarkable progress in estimating hand geometry from images where the hands are part of the scene. This paper focuses on a new problem not explored so far: predicting how a human would grasp one or several objects, given a single RGB image of these objects. This problem has enormous potential in, e.g., augmented reality, robotics, or prosthetic design. In order to predict feasible grasps, we need to understand the semantic content of the image, its geometric structure, and all potential interactions with a hand physical model. To this end, we introduce a generative model that jointly reasons at all these levels and 1) regresses the 3D shape and pose of the objects in the scene; 2) estimates the grasp types; and 3) refines the 51 degrees of freedom (DoF) of a 3D hand model to minimize a graspability loss. To train this model we build the YCB-Affordance dataset, which contains more than 133k images of 21 objects from the YCB-Video dataset. We have annotated these images with more than 28M plausible 3D human grasps according to a 33-class taxonomy. A thorough evaluation on synthetic and real images shows that our model can robustly predict realistic grasps, even in cluttered scenes with multiple objects in close contact.
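To make the three-stage structure described in the abstract concrete, below is a minimal sketch of such a pipeline: a shared image encoder feeding 1) an object shape/pose head, 2) a grasp-type classifier over the 33-class taxonomy, and 3) a hand head that outputs a 51-DoF configuration conditioned on the predicted grasp type. This is only an illustration under stated assumptions, not the authors' implementation: the backbone, the object pose parameterisation, the module names (GraspAffordancePredictor, object_head, etc.), and all layer sizes other than the 33 classes and 51 DoF are invented for the example, and the graspability loss itself is omitted.

```python
# Hedged sketch of a three-stage grasp-affordance pipeline (assumptions noted above).
import torch
import torch.nn as nn


class GraspAffordancePredictor(nn.Module):
    def __init__(self, feat_dim=512, num_grasp_classes=33, hand_dof=51):
        super().__init__()
        # Shared image encoder (tiny stand-in; the real backbone is an assumption).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # 1) Object shape and pose regression (6D rotation + 3D translation +
        #    a 10-dim shape code; this parameterisation is an assumption).
        self.object_head = nn.Linear(feat_dim, 6 + 3 + 10)
        # 2) Grasp-type classification over the 33-class taxonomy.
        self.grasp_head = nn.Linear(feat_dim, num_grasp_classes)
        # 3) Refinement of the 51-DoF hand configuration, conditioned on the
        #    image features and the predicted grasp-type distribution.
        self.hand_head = nn.Sequential(
            nn.Linear(feat_dim + num_grasp_classes, 256), nn.ReLU(),
            nn.Linear(256, hand_dof),
        )

    def forward(self, rgb):
        feat = self.encoder(rgb)
        obj = self.object_head(feat)            # object shape + pose parameters
        grasp_logits = self.grasp_head(feat)    # grasp taxonomy scores
        grasp_prob = grasp_logits.softmax(dim=-1)
        hand = self.hand_head(torch.cat([feat, grasp_prob], dim=-1))
        return obj, grasp_logits, hand


# Usage on a single RGB image tensor (batch of 1, 3x256x256).
model = GraspAffordancePredictor()
obj, grasp_logits, hand = model(torch.randn(1, 3, 256, 256))
print(obj.shape, grasp_logits.shape, hand.shape)  # (1, 19) (1, 33) (1, 51)
```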