Human motion generation: A survey

W Zhu, X Ma, D Ro, H Ci, J Zhang, J Shi… - … on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Human motion generation aims to generate natural human pose sequences and shows
immense potential for real-world applications. Substantial progress has been made recently …

Single motion diffusion

S Raab, I Leibovitch, G Tevet, M Arar… - arXiv preprint arXiv …, 2023 - arxiv.org
Synthesizing realistic animations of humans, animals, and even imaginary creatures has
long been a goal for artists and computer graphics professionals. Compared to the imaging …

MoDi: Unconditional motion synthesis from diverse data

S Raab, I Leibovitch, P Li, K Aberman… - Proceedings of the …, 2023 - openaccess.thecvf.com
The emergence of neural networks has revolutionized the field of motion synthesis. Yet,
learning to unconditionally synthesize motions from a given distribution remains …

AI-generated content (AIGC) for various data modalities: A survey

LG Foo, H Rahmani, J Liu - arXiv preprint arXiv:2308.14177, 2023 - arxiv.org
AI-generated content (AIGC) methods aim to produce text, images, videos, 3D assets, and
other media using AI algorithms. Due to its wide range of applications and the demonstrated …

MoDiff: Action-conditioned 3D motion generation with denoising diffusion probabilistic models

M Zhao, M Liu, B Ren, S Dai, N Sebe - arXiv preprint arXiv:2301.03949, 2023 - arxiv.org
Diffusion-based generative models have recently emerged as powerful solutions for high-
quality synthesis in multiple domains. Leveraging the bidirectional Markov chains, diffusion …

Token boosting for robust self-supervised visual transformer pre-training

T Li, LG Foo, P Hu, X Shang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Learning with large-scale unlabeled data has become a powerful tool for pre-training Visual
Transformers (VTs). However, prior works tend to overlook that, in real-world scenarios, the …

Continuous intermediate token learning with implicit motion manifold for keyframe based motion interpolation

CA Mo, K Hu, C Long, Z Wang - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deriving sophisticated 3D motions from sparse keyframes is a particularly challenging
problem, due to the continuity and exceptional skeletal precision required. The action features are often …

A two-part transformer network for controllable motion synthesis

S Hou, H Tao, H Bao, W Xu - arXiv preprint arXiv:2304.12571, 2023 - arxiv.org
Although part-based motion synthesis networks have been investigated to reduce the
complexity of modeling heterogeneous human motions, their computational cost remains …

Few-shot generative model for skeleton-based human action synthesis using cross-domain adversarial learning

K Fukushi, Y Nozaki, K Nishihara… - Proceedings of the …, 2024 - openaccess.thecvf.com
We propose few-shot generative models of skeleton-based human actions on limited
samples of the target domain. We exploit large public datasets as a source of motion …