Aligning cyber space with physical world: A comprehensive survey on Embodied AI

Y Liu, W Chen, Y Bai, J Luo, X Song, K Jiang… - arXiv preprint arXiv …, 2024 - arxiv.org
Embodied Artificial Intelligence (Embodied AI) is crucial for achieving Artificial General
Intelligence (AGI) and serves as a foundation for various applications that bridge cyberspace …

Generating visual scenes from touch

F Yang, J Zhang, A Owens - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
An emerging line of work has sought to generate plausible imagery from touch. Existing
approaches, however, tackle only narrow aspects of the visuo-tactile synthesis problem, and …

Tactile-augmented radiance fields

Y Dou, F Yang, Y Liu, A Loquercio… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present a scene representation that brings vision and touch into a shared 3D space
which we call a tactile-augmented radiance field. This representation capitalizes on two key …

Touch2Touch: Cross-modal tactile generation for object manipulation

S Rodriguez, Y Dou, M Oller, A Owens… - arXiv preprint arXiv …, 2024 - arxiv.org
Today's touch sensors come in many shapes and sizes. This has made it challenging to
develop general-purpose touch processing methods since models are generally tied to one …

AllSight: A low-cost and high-resolution round tactile sensor with zero-shot learning capability

O Azulay, N Curtis, R Sokolovsky… - IEEE Robotics and …, 2023 - ieeexplore.ieee.org
Tactile sensing is a necessary capability for a robotic hand to perform fine manipulations
and interact with the environment. Optical sensors are a promising solution for high …

TouchSDF: A DeepSDF approach for 3D shape reconstruction using vision-based tactile sensing

M Comi, Y Lin, A Church, A Tonioni… - IEEE Robotics and …, 2024 - ieeexplore.ieee.org
Humans rely on their visual and tactile senses to develop a comprehensive 3D
understanding of their physical environment. Recently, there has been a growing interest in …

Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations

J Urain, A Mandlekar, Y Du, M Shafiullah, D Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
Learning from Demonstrations, the field that proposes to learn robot behavior models from
data, is gaining popularity with the emergence of deep generative models. Although the …

DIFFTACTILE: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation

Z Si, G Zhang, Q Ben, B Romero, Z Xian, C Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce DIFFTACTILE, a physics-based differentiable tactile simulation system
designed to enhance robotic manipulation with dense and physically accurate tactile …

Composable part-based manipulation

W Liu, J Mao, J Hsu, T Hermans, A Garg… - arXiv preprint arXiv …, 2024 - arxiv.org
In this paper, we propose composable part-based manipulation (CPM), a novel approach
that leverages object-part decomposition and part-part correspondences to improve learning …

Sim2Real bilevel adaptation for object surface classification using vision-based tactile sensors

GM Caddeo, A Maracani, PD Alfano… - … on Robotics and …, 2024 - ieeexplore.ieee.org
In this paper, we address the Sim2Real gap in the field of vision-based tactile sensors for
classifying object surfaces. We train a Diffusion Model to bridge this gap using a relatively …