Eye tracking in virtual reality: a broad review of applications and challenges

IB Adhanom, P MacNeilage, E Folmer - Virtual Reality, 2023 - Springer
Eye tracking is becoming increasingly available in head-mounted virtual reality displays, with several headsets featuring integrated eye trackers already commercially available. The …

CodeTalker: Speech-driven 3D facial animation with discrete motion prior

J Xing, M Xia, Y Zhang, X Cun… - Proceedings of the …, 2023 - openaccess.thecvf.com
Speech-driven 3D facial animation has been widely studied, yet there is still a gap to
achieving realism and vividness due to the highly ill-posed nature and scarcity of audio …

Pose-controllable talking face generation by implicitly modularized audio-visual representation

H Zhou, Y Sun, W Wu, CC Loy… - Proceedings of the …, 2021 - openaccess.thecvf.com
While accurate lip synchronization has been achieved for arbitrary-subject audio-driven
talking face generation, the problem of how to efficiently drive the head pose remains …

AvatarReX: Real-time expressive full-body avatars

Z Zheng, X Zhao, H Zhang, B Liu, Y Liu - ACM Transactions on Graphics …, 2023 - dl.acm.org
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video
data. The learnt avatar not only provides expressive control of the body, hands and the face …

EAMM: One-shot emotional talking face via audio-based emotion-aware motion model

X Ji, H Zhou, K Wang, Q Wu, W Wu, F Xu… - ACM SIGGRAPH 2022 …, 2022 - dl.acm.org
Although significant progress has been made to audio-driven talking face generation,
existing methods either neglect facial emotion or cannot be applied to arbitrary subjects. In …

Psychological benefits of using social virtual reality platforms during the COVID-19 pandemic: The role of social and spatial presence

M Barreda-Ángeles, T Hartmann - Computers in Human Behavior, 2022 - Elsevier
Social virtual reality (VR) platforms are an emergent phenomenon, with growing numbers of
users utilizing them to connect with others while experiencing feelings of presence (“being …

MeshTalk: 3D face animation from speech using cross-modality disentanglement

A Richard, M Zollhöfer, Y Wen… - Proceedings of the …, 2021 - openaccess.thecvf.com
This paper presents a generic method for generating full facial 3D animation from speech.
Existing approaches to audio-driven facial animation exhibit uncanny or static upper face …

Semantic-aware implicit neural audio-driven video portrait generation

X Liu, Y Xu, Q Wu, H Zhou, W Wu, B Zhou - European conference on …, 2022 - Springer
Animating high-fidelity video portrait with speech audio is crucial for virtual reality and digital
entertainment. While most previous studies rely on accurate explicit structural information …

Levels of naturalism in social neuroscience research

S Fan, O Dal Monte, SWC Chang - IScience, 2021 - cell.com
In order to understand ecologically meaningful social behaviors and their neural substrates
in humans and other animals, researchers have been using a variety of social stimuli in the …

Project Starline: A high-fidelity telepresence system

J Lawrence, R Overbeck, T Prives, T Fortes… - ACM SIGGRAPH 2024 …, 2024 - dl.acm.org
Experience Project Starline, the first photorealistic telepresence system that demonstrably
outperforms 2D videoconferencing systems, as measured by participant ratings, meeting …