Make pixels dance: High-dynamic video generation

Y Zeng, G Wei, J Zheng, J Zou, Y Wei… - Proceedings of the …, 2024 - openaccess.thecvf.com
Creating high-dynamic videos, such as motion-rich actions and sophisticated visual effects, poses a significant challenge in the field of artificial intelligence. Unfortunately, current state-of-the-art video generation methods, primarily focusing on text-to-video generation, tend to produce video clips with minimal motion despite maintaining high fidelity. We argue that relying solely on text instructions is insufficient and suboptimal for video generation. In this paper, we introduce PixelDance, a novel approach based on diffusion models that …

[PDF][PDF] Make Pixels Dance: High-Dynamic Video Generation (Supplementary Material)

Y Zeng, G Wei, J Zheng, JZYWY Zhang, H Li - openaccess.thecvf.com
We enhance the quality of the WebVid-10M dataset. Our process involves filtering out videos that are almost static by assessing their optical-flow values. For this, we employ VideoFlow [3], a tool capable of estimating bi-directional optical flows across multiple frames. This approach differs from traditional methods, which typically estimate optical flow between just two frames. Additionally, we exclude videos with an aesthetic score below 3.95, as determined by the Improved Aesthetic Predictor available on GitHub. Following these filtration steps …
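The two-stage filtering described above can be sketched as a simple per-video predicate. This is an illustrative assumption of how such a pipeline might look: the record fields, the `keep_video` helper, and the optical-flow threshold are hypothetical; only the 3.95 aesthetic cutoff comes from the text, and the flow magnitudes are presumed precomputed by a multi-frame estimator such as VideoFlow.

```python
from dataclasses import dataclass


@dataclass
class VideoRecord:
    """Hypothetical per-video metadata assumed precomputed upstream."""
    video_id: str
    mean_flow_magnitude: float  # average optical-flow magnitude across frames
    aesthetic_score: float      # score from an aesthetic predictor


def keep_video(rec: VideoRecord,
               flow_threshold: float = 1.0,       # illustrative value
               aesthetic_threshold: float = 3.95  # cutoff stated in the text
               ) -> bool:
    """Keep a video only if it is not near-static and scores high enough."""
    if rec.mean_flow_magnitude < flow_threshold:
        return False  # near-static clip, filtered out
    if rec.aesthetic_score < aesthetic_threshold:
        return False  # low aesthetic quality, filtered out
    return True


records = [
    VideoRecord("a", mean_flow_magnitude=0.2, aesthetic_score=5.0),  # static
    VideoRecord("b", mean_flow_magnitude=3.1, aesthetic_score=3.0),  # low score
    VideoRecord("c", mean_flow_magnitude=2.4, aesthetic_score=4.6),  # kept
]
kept = [r.video_id for r in records if keep_video(r)]
print(kept)  # ['c']
```

In practice both filters would run over the full dataset before training; the thresholds here are placeholders for whatever values the authors tuned.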