Space-time diffusion features for zero-shot text-driven motion transfer

D Yatim, R Fridman, O Bar-Tal… - Proceedings of the …, 2024 - openaccess.thecvf.com
Abstract
We present a new method for text-driven motion transfer: synthesizing a video that complies with an input text prompt describing the target objects and scene, while maintaining the input video's motion and scene layout. Prior methods are confined to transferring motion between two subjects within the same or closely related object categories, and are applicable only in limited domains (e.g., humans). In this work, we consider a significantly more challenging setting in which the target and source objects differ drastically in shape and fine-grained motion characteristics (e.g., translating a jumping dog into a dolphin). To this end, we leverage a pre-trained and fixed text-to-video diffusion model, which provides us with generative and motion priors. The pillar of our method is a new space-time feature loss derived directly from the model. This loss guides the generation process to preserve the overall motion of the input video while complying with the target object in terms of shape and fine-grained motion traits.
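To make the guidance idea concrete, here is a minimal NumPy sketch of feature-matching guidance under loose assumptions: at each update, a latent being generated is nudged down the gradient of a squared feature-matching loss against features extracted from the source video. The feature extractor `spatial_marginal_features` (per-frame spatial means) is a hypothetical stand-in for the paper's actual space-time diffusion features, chosen only so the gradient is analytic; the paper's loss and features differ.

```python
import numpy as np

def spatial_marginal_features(latent):
    # Hypothetical stand-in for space-time diffusion features:
    # per-frame spatial means of a video latent of shape (T, H, W, C) -> (T, C).
    return latent.mean(axis=(1, 2))

def feature_guidance_step(latent, source_feats, step_size=0.5):
    # One guidance update: nudge the latent to reduce the squared
    # feature-matching loss || f(latent) - f(source) ||^2.
    feats = spatial_marginal_features(latent)
    diff = feats - source_feats                     # (T, C)
    T, H, W, C = latent.shape
    # Analytic gradient of the loss w.r.t. the latent for this linear f:
    # each spatial location contributes diff / (H * W) (constant factor
    # absorbed into step_size).
    grad = np.broadcast_to(diff[:, None, None, :], latent.shape) / (H * W)
    loss = float((diff ** 2).sum())
    return latent - step_size * grad, loss

rng = np.random.default_rng(0)
source = rng.normal(size=(4, 8, 8, 3))   # toy "input video" latent
target = rng.normal(size=(4, 8, 8, 3))   # toy latent being generated

src_feats = spatial_marginal_features(source)
losses = []
for _ in range(20):
    target, loss = feature_guidance_step(target, src_feats)
    losses.append(loss)
```

In the actual method this kind of update would be interleaved with denoising steps of the fixed text-to-video model, so the text prompt shapes appearance while the feature loss anchors motion and layout; here the loop simply shows that repeated gradient steps shrink the feature mismatch.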