HumanTOMATO: Text-aligned whole-body motion generation

S Lu, LH Chen, A Zeng, J Lin, R Zhang, L Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
This work targets a novel text-driven whole-body motion generation task, which takes a
given textual description as input and aims at generating high-quality, diverse, and coherent …

Disentangled clothed avatar generation from text descriptions

J Wang, Y Liu, Z Dou, Z Yu, Y Liang, C Lin, X Li… - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we introduce a novel text-to-avatar generation method that separately
generates the human body and the clothes and allows high-quality animation on the …

Part123: Part-aware 3D reconstruction from a single-view image

A Liu, C Lin, Y Liu, X Long, Z Dou, HX Guo… - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
Recently, the emergence of diffusion models has opened up new opportunities for single-
view reconstruction. However, all the existing methods represent the target object as a …

Synthesizing physically plausible human motions in 3D scenes

L Pan, J Wang, B Huang, J Zhang… - … Conference on 3D …, 2024 - ieeexplore.ieee.org
We present a physics-based character control framework for synthesizing human-scene
interactions. Recent advances adopt physics simulation to mitigate artifacts produced by …

SMooDi: Stylized motion diffusion model

L Zhong, Y Xie, V Jampani, D Sun, H Jiang - European Conference on …, 2025 - Springer
We introduce a novel Stylized Motion Diffusion model, dubbed SMooDi, to generate stylized
motion driven by content texts and style motion sequences. Unlike existing methods that …

PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios

J Wang, Z Luo, Y Yuan, Y Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
We address the challenge of content diversity and controllability in pedestrian simulation for
driving scenarios. Recent pedestrian animation frameworks have a significant limitation …

Interactive character control with auto-regressive motion diffusion models

Y Shi, J Wang, X Jiang, B Lin, B Dai… - ACM Transactions on …, 2024 - dl.acm.org
Real-time character control is an essential component for interactive experiences, with a
broad range of applications, including physics simulations, video games, and virtual reality …

Large motion model for unified multi-modal motion generation

M Zhang, D Jin, C Gu, F Hong, Z Cai, J Huang… - arXiv preprint arXiv …, 2024 - arxiv.org
Human motion generation, a cornerstone technique in animation and video production, has
widespread applications in various tasks like text-to-motion and music-to-dance. Previous …

MotionLLM: Understanding Human Behaviors from Human Motions and Videos

LH Chen, S Lu, A Zeng, H Zhang, B Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
This study delves into the realm of multi-modality (i.e., video and motion modalities) human
behavior understanding by leveraging the powerful capabilities of Large Language Models …

HMD: Environment-aware Motion Generation from Single Egocentric Head-Mounted Device

V Guzov, Y Jiang, F Hong, G Pons-Moll… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper investigates the online generation of realistic full-body human motion using a
single head-mounted device with an outward-facing color camera and the ability to perform …