Flexible diffusion modeling of long videos

W Harvey, S Naderiparizi, V Masrani… - Advances in …, 2022 - proceedings.neurips.cc
We present a framework for video modeling based on denoising diffusion probabilistic
models that produces long-duration video completions in a variety of realistic environments …

Diffusion probabilistic modeling for video generation

R Yang, P Srivastava, S Mandt - Entropy, 2023 - mdpi.com
Denoising diffusion probabilistic models are a promising new class of generative models
that mark a milestone in high-quality image generation. This paper showcases their ability to …

Video probabilistic diffusion models in projected latent space

S Yu, K Sohn, S Kim, J Shin - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Despite the remarkable progress in deep generative models, synthesizing high-resolution
and temporally coherent videos still remains a challenge due to their high-dimensionality …

MCVD: Masked conditional video diffusion for prediction, generation, and interpolation

V Voleti, A Jolicoeur-Martineau… - Advances in neural …, 2022 - proceedings.neurips.cc
Video prediction is a challenging task. The quality of video frames from current state-of-the-
art (SOTA) generative models tends to be poor and generalization beyond the training data …

VideoFlow: A conditional flow-based model for stochastic video generation

M Kumar, M Babaeizadeh, D Erhan, C Finn… - arXiv preprint arXiv …, 2019 - arxiv.org
Generative models that can model and predict sequences of future events can, in principle,
learn to capture complex real-world phenomena, such as physical interactions. However, a …

Preserve your own correlation: A noise prior for video diffusion models

S Ge, S Nah, G Liu, T Poon, A Tao… - Proceedings of the …, 2023 - openaccess.thecvf.com
Despite tremendous progress in generating high-quality images using diffusion models,
synthesizing a sequence of animated frames that are both photorealistic and temporally …

LLM-grounded video diffusion models

L Lian, B Shi, A Yala, T Darrell, B Li - arXiv preprint arXiv:2309.17444, 2023 - arxiv.org
Text-conditioned diffusion models have emerged as a promising tool for neural video
generation. However, current models still struggle with intricate spatiotemporal prompts and …

High fidelity video prediction with large stochastic recurrent neural networks

R Villegas, A Pathak, H Kannan… - Advances in …, 2019 - proceedings.neurips.cc
Predicting future video frames is extremely challenging, as there are many factors of
variation that make up the dynamics of how frames change through time. Previously …

VideoFusion: Decomposed diffusion models for high-quality video generation

Z Luo, D Chen, Y Zhang, Y Huang… - 2023 IEEE/CVF …, 2023 - ieeexplore.ieee.org