Speech-driven facial animation using polynomial fusion of features

T Kefalas, K Vougioukas, Y Panagakis… - ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, 2020 - ieeexplore.ieee.org
Speech-driven facial animation uses a speech signal to generate realistic videos of talking faces. Recent deep learning approaches to facial synthesis extract low-dimensional representations, concatenate them, and then decode the concatenated vector. This captures only first-order interactions between the features and ignores higher-order interactions. In this paper we propose a polynomial fusion layer that models the joint representation of the encodings as a higher-order polynomial, with the parameters modelled by a tensor decomposition. We demonstrate the suitability of this approach through experiments on generated videos, evaluated with a range of metrics covering video quality, audiovisual synchronisation and the generation of blinks.
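
The abstract does not give implementation details, but the core idea of replacing plain concatenation with a factorised higher-order fusion of the encodings can be sketched as follows. This is a minimal, hypothetical PyTorch example, not the authors' implementation: the layer names, dimensions, and the use of a CP-style rank-R factorisation for the second-order term are assumptions.

```python
import torch
import torch.nn as nn


class PolynomialFusion(nn.Module):
    """Illustrative second-order fusion of two encodings.

    Instead of concatenating an audio encoding and an identity encoding
    (which yields only first-order terms after a linear decoder layer),
    each encoding is projected through a factor matrix and the projections
    are combined with an element-wise product. This corresponds to a
    CP-factorised second-order polynomial term of rank `rank`.
    """

    def __init__(self, audio_dim: int, identity_dim: int, rank: int, out_dim: int):
        super().__init__()
        # Factor matrices for the second-order (interaction) term.
        self.audio_factor = nn.Linear(audio_dim, rank, bias=False)
        self.identity_factor = nn.Linear(identity_dim, rank, bias=False)
        # First-order (linear) terms, analogous to plain concatenation.
        self.audio_linear = nn.Linear(audio_dim, rank, bias=False)
        self.identity_linear = nn.Linear(identity_dim, rank, bias=False)
        self.output = nn.Linear(rank, out_dim)

    def forward(self, audio_enc: torch.Tensor, identity_enc: torch.Tensor) -> torch.Tensor:
        # Second-order interaction via element-wise product of the factors.
        second_order = self.audio_factor(audio_enc) * self.identity_factor(identity_enc)
        first_order = self.audio_linear(audio_enc) + self.identity_linear(identity_enc)
        return self.output(first_order + second_order)


if __name__ == "__main__":
    fusion = PolynomialFusion(audio_dim=256, identity_dim=128, rank=64, out_dim=128)
    audio = torch.randn(4, 256)      # batch of audio encodings
    identity = torch.randn(4, 128)   # batch of identity encodings
    print(fusion(audio, identity).shape)  # torch.Size([4, 128])
```

The point of the factorisation is that the second-order interaction is modelled with parameter count linear in the encoding dimensions (two factor matrices of rank R) rather than the quadratic cost of an explicit bilinear tensor, which is what makes higher-order fusion practical in this setting.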