Authors
Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow
Publication date
2020/5
Journal
Computer Graphics Forum
Volume
39
Issue
2
Pages
487-496
Abstract
Automatic synthesis of realistic gestures promises to transform the fields of animation, avatars and communicative agents. In off‐line applications, novel tools can alter the role of an animator to that of a director, who provides only high‐level input for the desired animation; a learned network then translates these instructions into an appropriate sequence of body poses. In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters. In this paper we address some of the core issues towards these ends. By adapting a deep learning‐based motion synthesis method called MoGlow, we propose a new generative model for generating state‐of‐the‐art realistic speech‐driven gesticulation. Owing to the probabilistic nature of the approach, our model can produce a battery of different, yet plausible, gestures given the same input speech signal. Just like …
Total citations