3D Gaussian Splatting for Real-Time Radiance Field Rendering
- URL: http://arxiv.org/abs/2308.04079v1
- Date: Tue, 8 Aug 2023 06:37:06 GMT
- Title: 3D Gaussian Splatting for Real-Time Radiance Field Rendering
- Authors: Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis
- Abstract summary: We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times.
We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
- Score: 4.320393382724066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Radiance Field methods have recently revolutionized novel-view synthesis of
scenes captured with multiple photos or videos. However, achieving high visual
quality still requires neural networks that are costly to train and render,
while recent faster methods inevitably trade off speed for quality. For
unbounded and complete scenes (rather than isolated objects) and 1080p
resolution rendering, no current method can achieve real-time display rates. We
introduce three key elements that allow us to achieve state-of-the-art visual
quality while maintaining competitive training times and importantly allow
high-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution.
First, starting from sparse points produced during camera calibration, we
represent the scene with 3D Gaussians that preserve desirable properties of
continuous volumetric radiance fields for scene optimization while avoiding
unnecessary computation in empty space; Second, we perform interleaved
optimization/density control of the 3D Gaussians, notably optimizing
anisotropic covariance to achieve an accurate representation of the scene;
Third, we develop a fast visibility-aware rendering algorithm that supports
anisotropic splatting and both accelerates training and allows real-time
rendering. We demonstrate state-of-the-art visual quality and real-time
rendering on several established datasets.
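The second and third elements can be made concrete with two formulas used in this line of work: the paper parameterizes each Gaussian's anisotropic covariance as Σ = R S S^T R^T (rotation R, diagonal scale S) and, following the EWA splatting formulation it builds on, projects it to a 2D screen-space footprint via Σ' = J W Σ W^T J^T, where W is the viewing transformation and J the Jacobian of the projective mapping. The NumPy sketch below illustrates both steps under simplifying assumptions: the Gaussian is already expressed in camera coordinates (so W is omitted), and all function and variable names are illustrative rather than taken from the authors' CUDA implementation.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance_3d(scale, quat):
    """Anisotropic covariance Sigma = R S S^T R^T from scale and rotation."""
    R = quat_to_rotmat(quat)
    S = np.diag(scale)
    M = R @ S
    return M @ M.T

def project_covariance(sigma3d, mean_cam, fx, fy):
    """Project a 3D covariance to a 2D screen-space covariance.

    EWA-style approximation Sigma' = J Sigma J^T, with J the Jacobian of
    the perspective projection evaluated at the Gaussian mean; the
    world-to-camera rotation is assumed to have been applied already.
    """
    x, y, z = mean_cam
    J = np.array([
        [fx / z, 0.0,    -fx * x / z**2],
        [0.0,    fy / z, -fy * y / z**2],
    ])
    return J @ sigma3d @ J.T

# Usage: an elongated Gaussian two units in front of the camera.
scale = np.array([0.05, 0.05, 0.3])    # anisotropic extents
quat = np.array([1.0, 0.0, 0.0, 0.0])  # identity rotation
sigma = covariance_3d(scale, quat)
sigma2d = project_covariance(sigma, mean_cam=np.array([0.0, 0.0, 2.0]),
                             fx=1000.0, fy=1000.0)
print(sigma2d)  # 2x2 footprint used to rasterize the splat
```

Optimizing a scale vector and a rotation quaternion rather than the covariance matrix directly keeps Σ positive semi-definite throughout gradient descent, which is what "optimizing anisotropic covariance" refers to; the resulting 2x2 footprint Σ' determines which pixels a splat touches during the visibility-aware rasterization.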
Related papers
- RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS [47.47003067842151]
We present RadSplat, a lightweight method for robust real-time rendering of complex scenes.
First, we use radiance fields as a prior and supervision signal for optimizing point-based scene representations, leading to improved quality and more robust optimization.
Next, we develop a novel pruning technique reducing the overall point count while maintaining high quality, leading to smaller and more compact scene representations with faster inference speeds.
arXiv Detail & Related papers (2024-03-20T17:59:55Z)
- VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [51.49008959209671]
We introduce VoxNeRF, a novel approach that leverages volumetric representations to enhance the quality and efficiency of indoor view synthesis.
We employ multi-resolution hash grids to adaptively capture spatial features, effectively managing occlusions and the intricate geometry of indoor scenes.
We validate our approach against three public indoor datasets and demonstrate that VoxNeRF outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
- Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction [29.83056271799794]
Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering.
We propose a deformable 3D Gaussian Splatting method that reconstructs scenes using 3D Gaussians and learns them in canonical space.
Through a differential Gaussianizer, the deformable 3D Gaussians achieve not only higher rendering quality but also real-time rendering speed.
arXiv Detail & Related papers (2023-09-22T16:04:02Z)
- Real-Time Radiance Fields for Single-Image Portrait View Synthesis [85.32826349697972]
We present a one-shot method to infer and render a 3D representation from a single unposed image in real-time.
Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering.
Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization.
arXiv Detail & Related papers (2023-05-03T17:56:01Z)
- NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields [99.57774680640581]
We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities belonging to three categories: static, deforming, and new areas.
arXiv Detail & Related papers (2022-10-28T07:11:05Z)
- Foveated Neural Radiance Fields for Real-Time and Egocentric Virtual Reality [11.969281058344581]
High-quality 3D graphics requires large volumes of fine-detailed scene data for rendering.
Recent approaches to combat this problem include remote rendering/streaming and neural representations of 3D assets.
We present the first gaze-contingent 3D neural representation and view synthesis method.
arXiv Detail & Related papers (2021-03-30T14:05:47Z)
- Neural Lumigraph Rendering [33.676795978166375]
State-of-the-art (SOTA) neural volume rendering approaches are slow to train and require minutes of inference (i.e., rendering) time for high image resolutions.
We adopt high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images.
Our neural rendering pipeline accelerates SOTA neural volume rendering by about two orders of magnitude and our implicit surface representation is unique in allowing us to export a mesh with view-dependent texture information.
arXiv Detail & Related papers (2021-03-22T03:46:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.