Diffusion-based generation, optimization, and planning in 3D scenes

S Huang, Z Wang, P Li, B Jia, T Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
We introduce SceneDiffuser, a conditional generative model for 3D scene understanding.
SceneDiffuser provides a unified model for solving scene-conditioned generation …

🏘️ ProcTHOR: Large-Scale Embodied AI Using Procedural Generation

M Deitke, E VanderBilt, A Herrasti… - Advances in …, 2022 - proceedings.neurips.cc
Massive datasets and high-capacity models have driven many recent advancements in
computer vision and natural language understanding. This work presents a platform to …

BEHAVIOR-1K: A benchmark for embodied AI with 1,000 everyday activities and realistic simulation

C Li, R Zhang, J Wong, C Gokmen… - … on Robot Learning, 2023 - proceedings.mlr.press
We present BEHAVIOR-1K, a comprehensive simulation benchmark for human-centered
robotics. BEHAVIOR-1K includes two components, guided and motivated by the results of an …

BridgeData V2: A dataset for robot learning at scale

HR Walke, K Black, TZ Zhao, Q Vuong… - … on Robot Learning, 2023 - proceedings.mlr.press
We introduce BridgeData V2, a large and diverse dataset of robotic manipulation behaviors
designed to facilitate research in scalable robot learning. BridgeData V2 contains 53,896 …

UniDexGrasp: Universal robotic dexterous grasping via learning diverse proposal generation and goal-conditioned policy

Y Xu, W Wan, J Zhang, H Liu, Z Shan… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this work, we tackle the problem of learning universal robotic dexterous grasping from a
point cloud observation under a table-top setting. The goal is to grasp and lift up objects in …

HOI4D: A 4D egocentric dataset for category-level human-object interaction

Y Liu, Y Liu, C Jiang, K Lyu, W Wan… - Proceedings of the …, 2022 - openaccess.thecvf.com
We present HOI4D, a large-scale 4D egocentric dataset with rich annotations, to catalyze the
research of category-level human-object interaction. HOI4D consists of 2.4M RGB-D …

UniDexGrasp++: Improving dexterous grasping policy learning via geometry-aware curriculum and iterative generalist-specialist learning

W Wan, H Geng, Y Liu, Z Shan… - Proceedings of the …, 2023 - openaccess.thecvf.com
We propose a novel, object-agnostic method for learning a universal policy for dexterous
object grasping from realistic point cloud observations and proprioceptive information under …

GAPartNet: Cross-category domain-generalizable object perception and manipulation via generalizable and actionable parts

H Geng, H Xu, C Zhao, C Xu, L Yi… - Proceedings of the …, 2023 - openaccess.thecvf.com
For years, researchers have been devoted to generalizable object perception and
manipulation, where cross-category generalizability is highly desired yet underexplored. In …

Neural volumetric memory for visual locomotion control

R Yang, G Yang, X Wang - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Legged robots have the potential to expand the reach of autonomy beyond paved roads. In
this work, we consider the difficult problem of locomotion on challenging terrains using a …

ManiSkill2: A unified benchmark for generalizable manipulation skills

J Gu, F Xiang, X Li, Z Ling, X Liu, T Mu, Y Tang… - arXiv preprint arXiv …, 2023 - arxiv.org
Generalizable manipulation skills, which can be composed to tackle long-horizon and
complex daily chores, are one of the cornerstones of Embodied AI. However, existing …