VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points
- URL: http://arxiv.org/abs/2410.17932v1
- Date: Wed, 23 Oct 2024 14:54:48 GMT
- Title: VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points
- Authors: Linus Franke, Laura Fink, Marc Stamminger
- Abstract summary: High-performance demands of virtual reality systems present challenges in utilizing fast-to-render scene representations like 3DGS.
We propose foveated rendering as a promising solution to these obstacles.
We introduce a novel foveated rendering approach for virtual reality that leverages the sharp, detailed output of neural point rendering for the foveal region, fused with a smooth 3DGS rendering for peripheral vision.
- Abstract: Recent advances in novel view synthesis (NVS), particularly neural radiance fields (NeRF) and Gaussian splatting (3DGS), have demonstrated impressive results in photorealistic scene rendering. These techniques hold great potential for applications in virtual tourism and teleportation, where immersive realism is crucial. However, the high-performance demands of virtual reality (VR) systems present challenges in directly utilizing even such fast-to-render scene representations like 3DGS due to latency and computational constraints. In this paper, we propose foveated rendering as a promising solution to these obstacles. We analyze state-of-the-art NVS methods with respect to their rendering performance and compatibility with the human visual system. Our approach introduces a novel foveated rendering approach for Virtual Reality, that leverages the sharp, detailed output of neural point rendering for the foveal region, fused with a smooth rendering of 3DGS for the peripheral vision. Our evaluation confirms that perceived sharpness and detail-richness are increased by our approach compared to a standard VR-ready 3DGS configuration. Our system meets the necessary performance requirements for real-time VR interactions, ultimately enhancing the user's immersive experience. Project page: https://lfranke.github.io/vr_splatting
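The abstract describes fusing a sharp foveal rendering with a smooth peripheral one around the tracked gaze point. As a rough illustration of this kind of gaze-contingent compositing (not the paper's actual implementation; the function name, parameters, and the simple radial blend are assumptions for the sketch), the two renderings can be combined with a smooth foveal mask:

```python
import numpy as np

def foveated_composite(foveal_img, peripheral_img, gaze_xy,
                       fovea_radius, blend_width):
    """Blend a sharp foveal rendering into a smooth peripheral rendering
    around the current gaze point. Illustrative sketch only: images are
    H x W x 3 float arrays, gaze_xy is (x, y) in pixels."""
    h, w, _ = peripheral_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance of every pixel from the gaze point
    dist = np.sqrt((xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2)
    # Alpha is 1 inside the fovea, 0 in the periphery, with a smooth
    # linear ramp of width blend_width between the two regions
    alpha = np.clip((fovea_radius + blend_width - dist) / blend_width,
                    0.0, 1.0)
    return alpha[..., None] * foveal_img \
        + (1.0 - alpha[..., None]) * peripheral_img
```

In a real system the foveal image would come from the neural point renderer and the peripheral image from 3DGS, with the mask updated per frame from the headset's eye tracker; the linear ramp here stands in for whatever perceptually tuned falloff the method actually uses.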
Related papers
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multiview images.
3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending.
Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z) - OccGaussian: 3D Gaussian Splatting for Occluded Human Rendering [55.50438181721271]
Previous methods that utilize NeRF for surface rendering to recover occluded areas require more than one day to train and several seconds to render those areas.
We propose OccGaussian based on 3D Gaussian Splatting, which can be trained within 6 minutes and produces high-quality human renderings up to 160 FPS with occluded input.
arXiv Detail & Related papers (2024-04-12T13:00:06Z) - VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality [39.53150683721031]
Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction.
The components of our Virtual Reality system are designed for high efficiency and effectiveness.
arXiv Detail & Related papers (2024-01-30T01:28:36Z) - ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering [62.81677824868519]
We propose an animatable Gaussian splatting approach for photorealistic rendering of dynamic humans in real-time.
We parameterize the clothed human as animatable 3D Gaussians, which can be efficiently splatted into image space to generate the final rendering.
We benchmark ASH with competing methods on pose-controllable avatars, demonstrating that our method outperforms existing real-time methods by a large margin and shows comparable or even better results than offline methods.
arXiv Detail & Related papers (2023-12-10T17:07:37Z) - Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method achieves state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z) - VR-NeRF: High-Fidelity Virtualized Walkable Spaces [55.51127858816994]
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.
arXiv Detail & Related papers (2023-11-05T02:03:14Z) - ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for Neural Radiance Field [20.986012773294714]
We present an alternative to the physics-based volume rendering approach by introducing a self-attention-based framework on volumes along a ray.
Our method, which we call ABLE-NeRF, significantly reduces 'blurry' glossy surfaces in rendering and produces realistic translucent surfaces which are lacking in prior art.
arXiv Detail & Related papers (2023-03-24T05:34:39Z) - Immersive Neural Graphics Primitives [13.48024951446282]
We present and evaluate a NeRF-based framework that is capable of rendering scenes in immersive VR.
Our approach can yield a frame rate of 30 frames per second with a resolution of 1280x720 pixels per eye.
arXiv Detail & Related papers (2022-11-24T09:33:38Z) - NeuVV: Neural Volumetric Videos with Immersive Rendering and Editing [34.40837543752915]
We present a neural volumography technique called neural volumetric video or NeuVV to support immersive, interactive, and spatial-temporal rendering.
NeuVV encodes a dynamic neural radiance field (NeRF) into renderable and editable primitives.
We further develop a hybrid neural-rasterization rendering framework to support consumer-level VR headsets.
arXiv Detail & Related papers (2022-02-12T15:23:16Z) - HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [65.82222842213577]
We propose a novel neural rendering pipeline, which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality.
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface.
We then leverage the encoded information on the UV manifold to construct a 3D volumetric representation.
arXiv Detail & Related papers (2021-12-19T17:34:15Z) - Foveated Neural Radiance Fields for Real-Time and Egocentric Virtual Reality [11.969281058344581]
High-quality 3D graphics requires large volumes of fine-detailed scene data for rendering.
Recent approaches to combat this problem include remote rendering/streaming and neural representations of 3D assets.
We present the first gaze-contingent 3D neural representation and view synthesis method.
arXiv Detail & Related papers (2021-03-30T14:05:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.