Learning to act from actionless videos through dense correspondences

PC Ko, J Mao, Y Du, SH Sun… - arXiv preprint arXiv …, 2023 - arxiv.org
In this work, we present an approach to construct a video-based robot policy capable of
reliably executing diverse tasks across different robots and environments from few video …

An unbiased look at datasets for visuo-motor pre-training

S Dasari, MK Srirama, U Jain… - Conference on Robot …, 2023 - proceedings.mlr.press
Visual representation learning holds great promise for robotics, but is severely hampered by
the scarcity and homogeneity of robotics datasets. Recent works address this problem by …

Imitation from observation: Learning to imitate behaviors from raw video via context translation

YX Liu, A Gupta, P Abbeel… - 2018 IEEE international …, 2018 - ieeexplore.ieee.org
Imitation learning is an effective approach for autonomous systems to acquire control
policies when an explicit reward function is unavailable, using supervision provided as …

From play to policy: Conditional behavior generation from uncurated robot data

ZJ Cui, Y Wang, NMM Shafiullah, L Pinto - arXiv preprint arXiv:2210.10047, 2022 - arxiv.org
While large-scale sequence modeling from offline data has led to impressive performance
gains in natural language and image generation, directly translating such ideas to robotics …

Structured world models from human videos

R Mendonca, S Bahl, D Pathak - arXiv preprint arXiv:2308.10901, 2023 - arxiv.org
We tackle the problem of learning complex, general behaviors directly in the real world. We
propose an approach for robots to efficiently learn manipulation skills using only a handful of …

Language-driven representation learning for robotics

S Karamcheti, S Nair, AS Chen, T Kollar, C Finn… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent work in visual representation learning for robotics demonstrates the viability of
learning from large video datasets of humans performing everyday tasks. Leveraging …

One-shot imitation from observing humans via domain-adaptive meta-learning

T Yu, C Finn, A Xie, S Dasari, T Zhang… - arXiv preprint arXiv …, 2018 - arxiv.org
Humans and animals are capable of learning a new behavior by observing others perform
the skill just once. We consider the problem of allowing a robot to do the same--learning …

Feature expansive reward learning: Rethinking human input

A Bobu, M Wiggert, C Tomlin, AD Dragan - Proceedings of the 2021 …, 2021 - dl.acm.org
When a person is not satisfied with how a robot performs a task, they can intervene to correct
it. Reward learning methods enable the robot to adapt its reward function online based on …

Actionable models: Unsupervised offline reinforcement learning of robotic skills

Y Chebotar, K Hausman, Y Lu, T Xiao… - arXiv preprint arXiv …, 2021 - arxiv.org
We consider the problem of learning useful robotic skills from previously collected offline
data without access to manually specified rewards or additional online exploration, a setting …

End-to-end robotic reinforcement learning without reward engineering

A Singh, L Yang, K Hartikainen, C Finn… - arXiv preprint arXiv …, 2019 - arxiv.org
The combination of deep neural network models and reinforcement learning algorithms can
make it possible to learn policies for robotic behaviors that directly read in raw sensory …