Using goal-conditioned reinforcement learning with deep imitation to control robot arm in flexible flat cable assembly task

J Li, H Shi, KS Hwang - IEEE Transactions on Automation Science and Engineering, 2023 - ieeexplore.ieee.org
Leveraging reinforcement learning for high-precision decision-making in robot arm assembly scenes is a desired goal in the industrial community. However, tasks like Flexible Flat Cable (FFC) assembly, which require highly trained workers, pose significant challenges due to sparse rewards and limited learning conditions. In this work, we propose a goal-conditioned self-imitation reinforcement learning method for FFC assembly that does not rely on a specific end-effector, in which both perception and behavior planning are learned through reinforcement learning. We analyze the challenges faced by the robot arm in high-precision assembly scenarios and balance the breadth and depth of exploration during training. Our end-to-end model consists of hindsight and self-imitation modules, allowing the robot arm to leverage futile exploration and to optimize successful trajectories. Our method does not require rule-based or manual rewards; it enables the robot arm to quickly find feasible solutions through experience relabeling while avoiding unnecessary exploration. We train the FFC assembly policy in a simulation environment and transfer it to the real scenario using domain adaptation. We explore various combinations of hindsight and self-imitation learning and discuss the results comprehensively. Experimental findings demonstrate that our model achieves fast and accurate flexible flat cable assembly, surpassing other reinforcement learning-based methods.

Note to Practitioners - The motivation for this article stems from the need to develop an efficient and accurate FFC assembly policy for the 3C (Computer, Communication, and Consumer Electronics) industry, promoting the development of intelligent manufacturing. Traditional control methods cannot complete such a high-precision task with a robot arm because the connectors are difficult to model, and existing reinforcement learning methods cannot converge within a restricted number of epochs because of the difficult goals or trajectories. To quickly learn a high-quality assembly policy for the robot arm and to accelerate convergence, we combine goal-conditioned reinforcement learning with a self-imitation mechanism, balancing the depth and breadth of exploration. The proposed method takes visual information and a six-dimensional force signal as the state and obtains satisfactory assembly policies. We build a simulation scene on the PyBullet platform, pre-train the robot arm in it, and then reuse the pre-trained policies in real scenarios with fine-tuning.
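The abstract does not give implementation details, so the following is a minimal, hypothetical Python sketch of the two mechanisms it names: hindsight relabeling of failed rollouts (so futile exploration still yields a learning signal) and a self-imitation buffer that keeps only successful trajectories for re-optimization. The transition format and all identifiers (sparse_reward, relabel_with_hindsight, SelfImitationBuffer) are assumptions for illustration, not the authors' code.

```python
import random
from collections import deque

# Sketch only (not the paper's implementation). Transitions are dicts with
# "state", "action", "achieved_goal", and "desired_goal"; the reward is sparse:
# 1 if the achieved goal matches the desired goal within a tolerance, else 0.

def sparse_reward(achieved_goal, desired_goal, tol=1e-3):
    return 1.0 if all(abs(a - d) <= tol
                      for a, d in zip(achieved_goal, desired_goal)) else 0.0

def relabel_with_hindsight(trajectory):
    """Relabel a failed trajectory: treat the final achieved goal as if it had
    been the desired goal, so the rollout still produces positive rewards."""
    final_goal = trajectory[-1]["achieved_goal"]
    return [{
        "state": t["state"],
        "action": t["action"],
        "achieved_goal": t["achieved_goal"],
        "desired_goal": final_goal,  # hindsight goal swap
        "reward": sparse_reward(t["achieved_goal"], final_goal),
    } for t in trajectory]

class SelfImitationBuffer:
    """Stores only successful trajectories so the policy can re-optimize
    (imitate) its own best behavior."""
    def __init__(self, capacity=1000):
        self.trajectories = deque(maxlen=capacity)

    def maybe_add(self, trajectory):
        if not trajectory:
            return
        last = trajectory[-1]
        # Keep the trajectory only if its final step reached the desired goal.
        if sparse_reward(last["achieved_goal"], last["desired_goal"]) > 0.0:
            self.trajectories.append(trajectory)

    def sample(self, batch_size):
        flat = [t for traj in self.trajectories for t in traj]
        return random.sample(flat, min(batch_size, len(flat)))
```

In a training loop, each rollout might first be offered to the self-imitation buffer and, if it failed, be relabeled with hindsight before entering the ordinary replay buffer; this mirrors the balance between exploration breadth (relabeled failures) and depth (re-optimized successes) described in the abstract.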