Robotic skill acquisition via instruction augmentation with vision-language models

T Xiao, H Chan, P Sermanet, A Wahid… - arXiv preprint arXiv …, 2022 - arxiv.org
In recent years, much progress has been made in learning robotic manipulation policies that
follow natural language instructions. Such methods typically learn from corpora of robot …

Mimicplay: Long-horizon imitation learning by watching human play

C Wang, L Fan, J Sun, R Zhang, L Fei-Fei, D Xu… - arXiv preprint arXiv …, 2023 - arxiv.org
Imitation learning from human demonstrations is a promising paradigm for teaching robots
manipulation skills in the real world. However, learning complex long-horizon tasks often …

Cross-domain transfer via semantic skill imitation

K Pertsch, R Desai, V Kumar, F Meier, JJ Lim… - arXiv preprint arXiv …, 2022 - arxiv.org
We propose an approach for semantic imitation, which uses demonstrations from a source
domain, e.g., human videos, to accelerate reinforcement learning (RL) in a different target …

Simultaneously learning transferable symbols and language groundings from perceptual data for instruction following

N Gopalan, E Rosen, GD Konidaris… - Robotics: Science and …, 2020 - par.nsf.gov
Enabling robots to learn tasks and follow instructions as easily as humans is important for
many real-world robot applications. Previous approaches have applied machine learning to …

Preference-Conditioned Language-Guided Abstraction

A Peng, A Bobu, BZ Li, TR Sumers… - Proceedings of the …, 2024 - dl.acm.org
Learning from demonstrations is a common way for users to teach robots, but it is prone to
spurious feature correlations. Recent work constructs state abstractions, i.e., visual …

Viola: Imitation learning for vision-based manipulation with object proposal priors

Y Zhu, A Joshi, P Stone, Y Zhu - Conference on Robot …, 2023 - proceedings.mlr.press
We introduce VIOLA, an object-centric imitation learning approach to learning closed-loop
visuomotor policies for robot manipulation. Our approach constructs object-centric …

Bridgedata v2: A dataset for robot learning at scale

HR Walke, K Black, TZ Zhao, Q Vuong… - … on Robot Learning, 2023 - proceedings.mlr.press
We introduce BridgeData V2, a large and diverse dataset of robotic manipulation behaviors
designed to facilitate research in scalable robot learning. BridgeData V2 contains 53,896 …

Learning to learn faster from human feedback with language model predictive control

J Liang, F Xia, W Yu, A Zeng, MG Arenas… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have been shown to exhibit a wide range of capabilities,
such as writing robot code from language commands, enabling non-experts to direct robot …

Learning multi-step manipulation tasks from a single human demonstration

D Guo - arXiv preprint arXiv:2312.15346, 2023 - arxiv.org
Learning from human demonstrations has exhibited remarkable achievements in robot
manipulation. However, the challenge remains to develop a robot system that matches …