HexPlane: A fast representation for dynamic scenes

A Cao, J Johnson - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Modeling and re-rendering dynamic 3D scenes is a challenging task in 3D vision. Prior
approaches build on NeRF and rely on implicit representations. This is slow since it requires …

SparseFusion: Distilling view-conditioned diffusion for 3D reconstruction

Z Zhou, S Tulsiani - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
We propose SparseFusion, a sparse-view 3D reconstruction approach that unifies recent
advances in neural rendering and probabilistic image generation. Existing approaches …

Splatter Image: Ultra-fast single-view 3D reconstruction

S Szymanowicz, C Rupprecht… - Proceedings of the …, 2024 - openaccess.thecvf.com
We introduce the Splatter Image, an ultra-efficient approach for monocular 3D object
reconstruction. Splatter Image is based on Gaussian Splatting, which allows fast and high …

GPS-Gaussian: Generalizable pixel-wise 3D Gaussian splatting for real-time human novel view synthesis

S Zheng, B Zhou, R Shao, B Liu… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a
character in real time. The proposed method enables 2K-resolution rendering …

Vision transformer for NeRF-based view synthesis from a single input image

KE Lin, YC Lin, WS Lai, TY Lin… - Proceedings of the …, 2023 - openaccess.thecvf.com
Although neural radiance fields (NeRF) have shown impressive advances in novel view
synthesis, most methods require multiple input images of the same scene with accurate …

SHERF: Generalizable human NeRF from a single image

S Hu, F Hong, L Pan, H Mei… - Proceedings of the …, 2023 - openaccess.thecvf.com
Existing Human NeRF methods for reconstructing 3D humans typically rely on
multiple 2D images from multi-view cameras or monocular videos captured from fixed …

Real-time neural rasterization for large scenes

JY Liu, Y Chen, Z Yang, J Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
We propose a new method for realistic real-time novel-view synthesis (NVS) of large scenes.
Existing fast neural rendering methods generate realistic results, but primarily work for small …

High-fidelity 3D GAN inversion by pseudo-multi-view optimization

J Xie, H Ouyang, J Piao, C Lei… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework
that can synthesize photo-realistic novel views while preserving specific details of the input …

Real-time radiance fields for single-image portrait view synthesis

A Trevithick, M Chan, M Stengel, E Chan… - ACM Transactions on …, 2023 - dl.acm.org
We present a one-shot method to infer and render a photorealistic 3D representation from a
single unposed image (e.g., a face portrait) in real time. Given a single RGB input, our image …

AffordanceLLM: Grounding affordance from vision language models

S Qian, W Chen, M Bai, X Zhou… - Proceedings of the …, 2024 - openaccess.thecvf.com
Affordance grounding refers to the task of finding the area of an object with which one can
interact. It is a fundamental but challenging task, as a successful solution requires the …