Open X-Embodiment: Robotic Learning Datasets and RT-X Models. A O'Neill, A Rehman, A Gupta, A Maddukuri, A Gupta, A Padalkar, A Lee, et al. arXiv preprint arXiv:2310.08864, 2023. Cited by 281.
OpenVLA: An Open-Source Vision-Language-Action Model. MJ Kim, K Pertsch, S Karamcheti, T Xiao, A Balakrishna, S Nair, et al. arXiv preprint arXiv:2406.09246, 2024. Cited by 101.
BridgeData V2: A Dataset for Robot Learning at Scale. HR Walke, K Black, TZ Zhao, Q Vuong, C Zheng, P Hansen-Estruch, et al. Conference on Robot Learning, 1723-1736, 2023. Cited by 89.
Vision-Based Manipulators Need to Also See from Their Hands. K Hsu*, MJ Kim*, R Rafailov, J Wu, C Finn. International Conference on Learning Representations (ICLR), 2022. Cited by 38.
NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis. A Zhou, MJ Kim, L Wang, P Florence, C Finn. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023. Cited by 36.
Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations. MJ Kim, J Wu, C Finn. arXiv preprint arXiv:2307.05959, 2023. Cited by 7.