ViewCrafter: Taming video diffusion models for high-fidelity novel view synthesis

W Yu, J Xing, L Yuan, W Hu, X Li, Z Huang… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite recent advancements in neural 3D reconstruction, the dependence on dense multi-view captures restricts their broader applicability. In this work, we propose …

BakedAvatar: Baking neural fields for real-time head avatar synthesis

HB Duan, M Wang, JC Shi, XC Chen… - ACM Transactions on …, 2023 - dl.acm.org
Synthesizing photorealistic 4D human head avatars from videos is essential for VR/AR,
telepresence, and video game applications. Although existing Neural Radiance Fields …

Portrait4D: Learning One-Shot 4D Head Avatar Synthesis using Synthetic Data

Y Deng, D Wang, X Ren, X Chen… - Proceedings of the …, 2024 - openaccess.thecvf.com
Existing one-shot 4D head synthesis methods usually learn from monocular videos with the
aid of 3DMM reconstruction, yet the latter is equally challenging, which restricts them from …

Portrait4D-v2: Pseudo multi-view data creates better 4D head synthesizer

Y Deng, D Wang, B Wang - European Conference on Computer Vision, 2025 - Springer
In this paper, we propose a novel learning approach for feed-forward one-shot 4D head
avatar synthesis. Different from existing methods that often learn from reconstructing …

VOODOO 3D: Volumetric Portrait Disentanglement for One-Shot 3D Head Reenactment

P Tran, E Zakharov, LN Ho, AT Tran… - Proceedings of the …, 2024 - openaccess.thecvf.com
We present a 3D-aware one-shot head reenactment method based on a fully volumetric
neural disentanglement framework for source appearance and driver expressions. Our …

Neural point-based volumetric avatar: Surface-guided neural points for efficient and photorealistic volumetric head avatar

C Wang, D Kang, YP Cao, L Bao, Y Shan… - SIGGRAPH Asia 2023 …, 2023 - dl.acm.org
Rendering photorealistic and dynamically moving human heads is crucial for ensuring a
pleasant and immersive experience in AR/VR and video conferencing applications …

Learning to generate conditional tri-plane for 3D-aware expression controllable portrait animation

T Ki, D Min, G Chae - European Conference on Computer Vision, 2025 - Springer
In this paper, we present Export3D, a one-shot 3D-aware portrait animation method
that is able to control the facial expression and camera view of a given portrait image. To …

Real3D-Portrait: One-shot realistic 3D talking portrait synthesis

Z Ye, T Zhong, Y Ren, J Yang, W Li, J Huang… - arXiv preprint arXiv …, 2024 - arxiv.org
One-shot 3D talking portrait generation aims to reconstruct a 3D avatar from an unseen
image, and then animate it with a reference video or audio to generate a talking portrait …

Tri²-plane: Thinking Head Avatar via Feature Pyramid

L Song, P Liu, L Chen, G Yin, C Xu - European Conference on Computer …, 2025 - Springer
Recent years have witnessed considerable achievements in facial avatar reconstruction with
neural volume rendering. Despite notable advancements, the reconstruction of complex and …

GPAvatar: Generalizable and precise head avatar from image(s)

X Chu, Y Li, A Zeng, T Yang, L Lin, Y Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Head avatar reconstruction, crucial for applications in virtual reality, online meetings,
gaming, and film industries, has garnered substantial attention within the computer vision …