Z Xing, Q Feng, H Chen, Q Dai, H Hu, H Xu… - arXiv preprint arXiv …, 2023 - arxiv.org
The recent wave of AI-generated content (AIGC) has witnessed substantial success in computer vision, with the diffusion model playing a crucial role in this achievement. Due to …
Text-to-Image diffusion models have made tremendous progress over the past two years, enabling the generation of highly realistic images based on open-domain text descriptions …
This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio …
L Hu - Proceedings of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Character Animation aims to generate character videos from still images through driving signals. Currently, diffusion models have become the mainstream in visual generation …
Z Xing, Q Dai, H Hu, Z Wu… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
The recent wave of AI-generated content has witnessed the great development and success of Text-to-Image (T2I) technologies. By contrast, Text-to-Video (T2V) still falls short of …
Z Huang, Y He, J Yu, F Zhang, C Si… - Proceedings of the …, 2024 - openaccess.thecvf.com
Video generation has witnessed significant advancements, yet evaluating these models remains a challenge. A comprehensive evaluation benchmark for video generation is …
Recent breakthroughs in text-to-4D generation rely on pre-trained text-to-image and text-to-video models to generate dynamic 3D scenes. However, current text-to-4D methods face a …
Z Li, R Tucker, N Snavely… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
We present an approach to modeling an image-space prior on scene motion. Our prior is learned from a collection of motion trajectories extracted from real video sequences …