Learning to grasp from 2.5 D images: A deep reinforcement learning approach

A Bertugli, P Galeone - arXiv preprint arXiv:1908.03440, 2019 - arxiv.org
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper aims to reach blocks with planar surfaces. These blocks can have different dimensions, shapes, positions, and orientations. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt at using 2.5D images alone as the input to a DRL algorithm to solve the grasping problem by regressing 3D world coordinates.
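The abstract describes a policy network that maps a single depth image to a robot tool configuration (a 3D position plus an orientation). The paper does not give the architecture here, so the following is only a minimal sketch of that input/output interface, assuming a 64×64 depth map and a 6-DoF output; the linear map stands in for the trained DRL policy and is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed depth-image resolution; the real camera resolution is not
# stated in the abstract.
H, W = 64, 64
depth = rng.uniform(0.2, 1.5, (H, W))  # simulated 2.5D depth map (metres)

# Hypothetical policy parameters: flattened depth -> 6 outputs
# (x, y, z position and roll, pitch, yaw orientation).
W_policy = rng.normal(0.0, 0.01, (6, H * W))
b_policy = np.zeros(6)

def policy(depth_img: np.ndarray) -> np.ndarray:
    """Regress a tool configuration [x, y, z, roll, pitch, yaw] from one
    depth image. A linear stand-in for the paper's policy network."""
    return W_policy @ depth_img.ravel() + b_policy

action = policy(depth)
print(action.shape)  # (6,)
```

The point of the sketch is the regression framing: rather than discretising the workspace, the policy outputs continuous 3D world coordinates (and an orientation) directly from the depth stream.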
