VRGaussianAvatar: Integrating 3D Gaussian Avatars into VR
- URL: http://arxiv.org/abs/2602.01674v1
- Date: Mon, 02 Feb 2026 05:42:40 GMT
- Title: VRGaussianAvatar: Integrating 3D Gaussian Avatars into VR
- Authors: Hail Song, Boram Yoon, Seokhwan Yang, Seoyoung Kang, Hyunjeong Kim, Henning Metzmacher, Woontack Woo
- Abstract summary: VRGaussianAvatar enables real-time full-body 3D Gaussian Splatting (3DGS) avatars using only head-mounted display (HMD) tracking signals. The VR Frontend uses inverse kinematics to estimate full-body pose and streams the resulting pose along with stereo camera parameters to the backend. The GA Backend stereoscopically renders a 3DGS avatar reconstructed from a single image.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present VRGaussianAvatar, an integrated system that enables real-time full-body 3D Gaussian Splatting (3DGS) avatars in virtual reality using only head-mounted display (HMD) tracking signals. The system adopts a parallel pipeline with a VR Frontend and a GA Backend. The VR Frontend uses inverse kinematics to estimate full-body pose and streams the resulting pose along with stereo camera parameters to the backend. The GA Backend stereoscopically renders a 3DGS avatar reconstructed from a single image. To improve stereo rendering efficiency, we introduce Binocular Batching, which jointly processes left and right eye views in a single batched pass to reduce redundant computation and support high-resolution VR displays. We evaluate VRGaussianAvatar with quantitative performance tests and a within-subject user study against image- and video-based mesh avatar baselines. Results show that VRGaussianAvatar sustains interactive VR performance and yields higher perceived appearance similarity, embodiment, and plausibility. Project page and source code are available at https://vrgaussianavatar.github.io.
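The Binocular Batching idea described in the abstract lends itself to a short illustration. The PyTorch sketch below shows how stacking the left- and right-eye view matrices into a single batch lets one batched matrix multiply project every Gaussian center for both views at once, rather than running two per-eye passes; all tensor names, shapes, and the 65 mm eye offset are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of binocular batching for stereo 3DGS projection.
# Assumed shapes and names are hypothetical, not from the paper's code.
import torch

def binocular_project(means: torch.Tensor,
                      view_left: torch.Tensor,
                      view_right: torch.Tensor) -> torch.Tensor:
    """Project N Gaussian centers into both eye views in one batched pass.

    means:      (N, 3) Gaussian centers in world space
    view_left:  (4, 4) left-eye world-to-camera matrix
    view_right: (4, 4) right-eye world-to-camera matrix
    returns:    (2, N, 4) homogeneous camera-space points, index 0 = left eye
    """
    # Lift to homogeneous coordinates: (N, 4)
    ones = torch.ones(means.shape[0], 1, dtype=means.dtype, device=means.device)
    means_h = torch.cat([means, ones], dim=1)

    # Stack both eyes into one batch: (2, 4, 4)
    views = torch.stack([view_left, view_right], dim=0)

    # One batched matmul covers both views: (2, 4, 4) @ (4, N) -> (2, 4, N)
    cam = views @ means_h.T
    return cam.transpose(1, 2)  # (2, N, 4)

if __name__ == "__main__":
    means = torch.randn(10_000, 3)
    eye_l = torch.eye(4)
    eye_r = torch.eye(4)
    eye_r[0, 3] = -0.065  # hypothetical 65 mm interpupillary offset
    out = binocular_project(means, eye_l, eye_r)
    print(out.shape)  # torch.Size([2, 10000, 4])
```

The benefit of batching here is that shared per-Gaussian work (and any downstream sorting or culling that can be fused across views) is launched once instead of twice, which matters at the high resolutions VR displays demand.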
Related papers
- FlexAvatar: Learning Complete 3D Head Avatars with Partial Supervision [54.69512425050288]
We introduce FlexAvatar, a method for creating high-quality and complete 3D head avatars from a single image. Our training procedure yields a smooth latent avatar space that facilitates identity and flexible fitting to an arbitrary number of input observations.
arXiv Detail & Related papers (2025-12-17T17:09:52Z) - Visionary: The World Model Carrier Built on WebGPU-Powered Gaussian Splatting Platform [104.39464309969253]
We present Visionary, an open, web-native platform for real-time rendering of various Gaussian Splatting models. Visionary enables dynamic neural processing while maintaining a lightweight, "click-to-run" browser experience.
arXiv Detail & Related papers (2025-12-09T10:54:58Z) - AGORA: Adversarial Generation Of Real-time Animatable 3D Gaussian Head Avatars [54.854597811704316]
AGORA is a novel framework that extends 3DGS within a generative adversarial network to produce animatable avatars. Expression fidelity is enforced via a dual-discriminator training scheme. AGORA generates avatars that are not only visually realistic but also precisely controllable.
arXiv Detail & Related papers (2025-12-06T14:05:20Z) - GSAC: Leveraging Gaussian Splatting for Photorealistic Avatar Creation with Unity Integration [45.439388725485124]
Photorealistic avatars are essential for immersive applications in virtual reality (VR) and augmented reality (AR), enabling lifelike interactions in areas such as training simulations, telemedicine, and virtual collaboration. Existing avatar creation techniques face significant challenges, including high costs, long creation times, and limited utility in virtual applications. We introduce an end-to-end 3D Gaussian Splatting (3DGS) avatar creation pipeline that leverages monocular video input to create a scalable and efficient photorealistic avatar.
arXiv Detail & Related papers (2025-04-17T15:10:14Z) - Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars [60.0866477932976]
We present Avat3r, which regresses a high-quality and animatable 3D head avatar from just a few input images. We make Large Reconstruction Models animatable and learn a powerful prior over 3D human heads from a large multi-view video dataset. We increase robustness by feeding input images with different expressions to our model during training, enabling the reconstruction of 3D head avatars from inconsistent inputs.
arXiv Detail & Related papers (2025-02-27T16:00:11Z) - GaussRender: Learning 3D Occupancy with Gaussian Rendering [86.89653628311565]
GaussRender is a module that improves 3D occupancy learning by enforcing projective consistency. Our method penalizes 3D configurations that produce inconsistent 2D projections, thereby enforcing a more coherent 3D structure.
arXiv Detail & Related papers (2025-02-07T16:07:51Z) - SqueezeMe: Mobile-Ready Distillation of Gaussian Full-Body Avatars [19.249226899376943]
We present SqueezeMe, a framework to convert high-fidelity 3D Gaussian full-body avatars into a lightweight representation. We achieve, for the first time, simultaneous animation and rendering of 3 Gaussian avatars in real-time (72 FPS) on a Meta Quest 3 VR headset.
arXiv Detail & Related papers (2024-12-19T18:46:55Z) - VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points [4.962171160815189]
We propose a novel hybrid approach that combines the strengths of both point-based rendering directions at their respective performance sweet spots. For the fovea only, we use neural points with a convolutional neural network, whose small pixel footprint provides sharp, detailed output. Our evaluation confirms that our approach increases sharpness and detail compared to a standard VR-ready 3DGS configuration.
arXiv Detail & Related papers (2024-10-23T14:54:48Z) - Generalizable and Animatable Gaussian Head Avatar [50.34788590904843]
We propose Generalizable and Animatable Gaussian head Avatar (GAGAvatar) for one-shot animatable head avatar reconstruction.
We generate the parameters of 3D Gaussians from a single image in a single forward pass.
Our method exhibits superior performance compared to previous methods in terms of reconstruction quality and expression accuracy.
arXiv Detail & Related papers (2024-10-10T14:29:00Z) - NPGA: Neural Parametric Gaussian Avatars [46.52887358194364]
We propose a data-driven approach to create high-fidelity controllable avatars from multi-view video recordings.
We build our method around 3D Gaussian splatting for its highly efficient rendering and to inherit the topological flexibility of point clouds.
We evaluate our method on the public NeRSemble dataset, demonstrating that NPGA significantly outperforms the previous state-of-the-art avatars on the self-reenactment task by 2.6 dB PSNR.
arXiv Detail & Related papers (2024-05-29T17:58:09Z)