SceneTex: High-quality texture synthesis for indoor scenes via diffusion priors

DZ Chen, H Li, HY Lee, S Tulyakov… - Proceedings of the …, 2024 - openaccess.thecvf.com
We propose SceneTex, a novel method for effectively generating high-quality and style-
consistent textures for indoor scenes using depth-to-image diffusion priors. Unlike previous …

RichDreamer: A generalizable normal-depth diffusion model for detail richness in text-to-3D

L Qiu, G Chen, X Gu, Q Zuo, M Xu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Lifting 2D diffusion for 3D generation is a challenging problem due to the lack of geometric
prior and the complex entanglement of materials and lighting in natural images. Existing …

MiDaS v3.1 - A model zoo for robust monocular relative depth estimation

R Birkl, D Wofk, M Müller - arXiv preprint arXiv:2307.14460, 2023 - arxiv.org
We release MiDaS v3.1 for monocular depth estimation, offering a variety of new models
based on different encoder backbones. This release is motivated by the success of …

ControlRoom3D: Room generation using semantic proxy rooms

J Schult, S Tsai, L Höllein, B Wu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Manually creating 3D environments for AR/VR applications is a complex process requiring
expert knowledge in 3D modeling software. Pioneering works facilitate this process by …

SceneWiz3D: Towards text-guided 3D scene composition

Q Zhang, C Wang, A Siarohin, P Zhuang, Y Xu… - arXiv preprint arXiv …, 2023 - arxiv.org
We are witnessing significant breakthroughs in the technology for generating 3D objects
from text. Existing approaches either leverage large text-to-image models to optimize a 3D …

iNVS: Repurposing diffusion inpainters for novel view synthesis

Y Kant, A Siarohin, M Vasilkovsky, RA Guler… - SIGGRAPH Asia 2023 …, 2023 - dl.acm.org
In this paper, we present a method for generating consistent novel views from a single
source image. Our approach focuses on maximizing the reuse of visible pixels from the …

Exploiting the signal-leak bias in diffusion models

MN Everaert, A Fitsios, M Bocchio… - Proceedings of the …, 2024 - openaccess.thecvf.com
There is a bias in the inference pipeline of most diffusion models. This bias arises from a
signal leak whose distribution deviates from the noise distribution, creating a discrepancy …

Diffusion priors for dynamic view synthesis from monocular videos

C Wang, P Zhuang, A Siarohin, J Cao, G Qian… - arXiv preprint arXiv …, 2024 - arxiv.org
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within
videos. Existing methods struggle to distinguish between motion and structure …

SparseGS: Real-time 360° sparse view synthesis using Gaussian splatting

H Xiong, S Muttukuru, R Upadhyay, P Chari… - arXiv preprint arXiv …, 2023 - arxiv.org
The problem of novel view synthesis has grown significantly in popularity recently with the
introduction of Neural Radiance Fields (NeRFs) and other implicit scene representation …

SpaceBlender: Creating Context-Rich Collaborative Spaces Through Generative 3D Scene Blending

N Numan, S Rajaram, BT Kumaravel… - Proceedings of the 37th …, 2024 - dl.acm.org
There is increased interest in using generative AI to create 3D spaces for Virtual Reality (VR)
applications. However, today's models produce artificial environments, falling short of …