Generating human motion from textual descriptions with discrete representations

J Zhang, Y Zhang, X Cun, Y Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this work, we investigate a simple and widely known conditional generative framework
based on Vector Quantised-Variational AutoEncoder (VQ-VAE) and Generative Pre-trained …

EDGE: Editable dance generation from music

J Tseng, R Castellon, K Liu - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Dance is an important human art form, but creating new dances can be difficult and time-
consuming. In this work, we introduce Editable Dance GEneration (EDGE), a state-of-the-art …

Synthesizing diverse human motions in 3d indoor scenes

K Zhao, Y Zhang, S Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a novel method for populating 3D indoor scenes with virtual humans that can
navigate in the environment and interact with objects in a realistic manner. Existing …

SINC: Spatial composition of 3D human motions for simultaneous action generation

N Athanasiou, M Petrovich… - Proceedings of the …, 2023 - openaccess.thecvf.com
Our goal is to synthesize 3D human motions given textual inputs describing simultaneous
actions, for example 'waving hand' while 'walking' at the same time. We refer to generating such …

Motion In-Betweening via Two-Stage Transformers

J Qin, Y Zheng, K Zhou - ACM Trans. Graph., 2022 - kunzhou.net
Traditional handcrafted animation often heavily relies on creating keyframes while the in-
betweening is automatically generated through spline-based interpolation. Animators have …

Parco: Part-coordinating text-to-motion synthesis

Q Zou, S Yuan, S Du, Y Wang, C Liu, Y Xu… - … on Computer Vision, 2025 - Springer
We study a challenging task: text-to-motion synthesis, aiming to generate motions that align
with textual descriptions and exhibit coordinated movements. Currently, the part-based …

Tedi: Temporally-entangled diffusion for long-term motion synthesis

Z Zhang, R Liu, R Hanocka, K Aberman - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
The gradual nature of a diffusion process that synthesizes samples in small increments
constitutes a key ingredient of Denoising Diffusion Probabilistic Models (DDPM), which have …

Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient

Z Lu, H Wang, Z Chang, G Yang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recently, methods for skeleton-based human activity recognition have been shown to be
vulnerable to adversarial attacks. However, these attack methods require either the full …

M2D2M: Multi-motion generation from text with discrete diffusion models

S Chi, H Chi, H Ma, N Agarwal, F Siddiqui… - … on Computer Vision, 2025 - Springer
We introduce the Multi-Motion Discrete Diffusion Models (M2D2M), a novel
approach for human motion generation from textual descriptions of multiple actions, utilizing …

PMP: Learning to physically interact with environments using part-wise motion priors

J Bae, J Won, D Lim, CH Min, YM Kim - ACM SIGGRAPH 2023 …, 2023 - dl.acm.org
We present a method to animate a character incorporating multiple part-wise motion priors
(PMP). While previous works allow creating realistic articulated motions from reference data …