3D neural field generation using triplane diffusion

JR Shue, ER Chan, R Po, Z Ankner… - Proceedings of the …, 2023 - openaccess.thecvf.com
Diffusion models have emerged as the state-of-the-art for image generation, among other
tasks. Here, we present an efficient diffusion-based model for 3D-aware generation of neural …

3DShape2VecSet: A 3D shape representation for neural fields and generative diffusion models

B Zhang, J Tang, M Niessner, P Wonka - ACM Transactions on Graphics …, 2023 - dl.acm.org
We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for
generative diffusion models. Our shape representation can encode 3D shapes given as …

NeuralField-LDM: Scene generation with hierarchical latent diffusion models

SW Kim, B Brown, K Yin, K Kreis… - Proceedings of the …, 2023 - openaccess.thecvf.com
Automatically generating high-quality real world 3D scenes is of enormous interest for
applications such as virtual reality and robotics simulation. Towards this goal, we introduce …

Mosaic-SDF for 3D generative models

L Yariv, O Puny, O Gafni… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Current diffusion- or flow-based generative models for 3D shapes divide into two categories: distilling pre-trained 2D image diffusion models and training directly on 3D shapes. When training a …

LN3Diff: Scalable latent neural fields diffusion for speedy 3D generation

Y Lan, F Hong, S Yang, S Zhou, X Meng, B Dai… - … on Computer Vision, 2025 - Springer
The field of neural rendering has witnessed significant progress with advancements in
generative models and differentiable rendering techniques. Though 2D diffusion has …

Deep generative models on 3D representations: A survey

Z Shi, S Peng, Y Xu, A Geiger, Y Liao… - arXiv preprint arXiv …, 2022 - arxiv.org
Generative models aim to learn the distribution of observed data by generating new
instances. With the advent of neural networks, deep generative models, including variational …

Magic123: One image to high-quality 3D object generation using both 2D and 3D diffusion priors

G Qian, J Mai, A Hamdi, J Ren, A Siarohin, B Li… - arXiv preprint arXiv …, 2023 - arxiv.org
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D mesh generation from a single unposed image in the wild using both 2D and 3D priors. In …

VQ3D: Learning a 3D-aware generative model on ImageNet

K Sargent, JY Koh, H Zhang, H Chang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent work has shown the possibility of training generative models of 3D content from 2D
image collections on small datasets corresponding to a single object class, such as human …

DiT-3D: Exploring plain diffusion transformers for 3D shape generation

S Mo, E Xie, R Chu, L Hong… - Advances in neural …, 2023 - proceedings.neurips.cc
Recent Diffusion Transformers (i.e., DiT) have demonstrated strong effectiveness in generating high-quality 2D images. However, it is unclear how the …

CC3D: Layout-conditioned generation of compositional 3D scenes

S Bahmani, JJ Park, D Paschalidou… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this work, we introduce CC3D, a conditional generative model that synthesizes complex
3D scenes conditioned on 2D semantic scene layouts, trained using single-view images …