3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting
- URL: http://arxiv.org/abs/2412.12507v2
- Date: Mon, 24 Mar 2025 19:39:23 GMT
- Title: 3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting
- Authors: Qi Wu, Janick Martinez Esturo, Ashkan Mirzaei, Nicolas Moenne-Loccoz, Zan Gojcic
- Abstract summary: We propose 3D Gaussian Unscented Transform (3DGUT), replacing the EWA splatting formulation with the Unscented Transform. This enables trivial support of distorted cameras with time-dependent effects such as rolling shutter, while retaining the efficiency of rasterization.
- Score: 15.124165321341646
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D Gaussian Splatting (3DGS) enables efficient reconstruction and high-fidelity real-time rendering of complex scenes on consumer hardware. However, due to its rasterization-based formulation, 3DGS is constrained to ideal pinhole cameras and lacks support for secondary lighting effects. Recent methods address these limitations by tracing the particles instead, but this comes at the cost of significantly slower rendering. In this work, we propose 3D Gaussian Unscented Transform (3DGUT), replacing the EWA splatting formulation with the Unscented Transform that approximates the particles through sigma points, which can be projected exactly under any nonlinear projection function. This modification enables trivial support of distorted cameras with time-dependent effects such as rolling shutter, while retaining the efficiency of rasterization. Additionally, we align our rendering formulation with that of tracing-based methods, enabling secondary ray tracing required to represent phenomena such as reflections and refractions within the same 3D representation. The source code is available at: https://github.com/nv-tlabs/3dgrut.
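To make the key mechanism concrete, here is a minimal sketch of the unscented transform applied to a single 3D Gaussian: draw 2n+1 sigma points from the particle, push each through an arbitrary nonlinear camera model, and re-fit a 2D mean and covariance from the projected points. This is an illustrative sketch, not the authors' implementation; `unscented_project`, `fisheye_like`, and the distortion parameters are hypothetical.

```python
import numpy as np

def unscented_project(mu, Sigma, project, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a 3D Gaussian (mu, Sigma) through a nonlinear
    projection via the unscented transform (sigma points)."""
    n = mu.shape[0]                             # dimension, 3 for a 3D Gaussian
    lam = alpha**2 * (n + kappa) - n            # standard UT scaling parameter
    L = np.linalg.cholesky((n + lam) * Sigma)   # matrix square root of scaled covariance

    # 2n + 1 sigma points: the mean plus symmetric offsets along L's columns.
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]

    # Standard UT weights for the mean and covariance estimates.
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    # Project each sigma point exactly; no Jacobian of the camera model needed.
    ys = np.stack([project(p) for p in pts])
    mean2d = w_m @ ys
    diff = ys - mean2d
    cov2d = (w_c[:, None] * diff).T @ diff
    return mean2d, cov2d

# Hypothetical distorted camera: perspective divide followed by a simple
# radial distortion (k1); stands in for any nonlinear projection function.
def fisheye_like(p, f=500.0, k1=-0.2):
    x, y = p[0] / p[2], p[1] / p[2]
    d = 1.0 + k1 * (x * x + y * y)
    return np.array([f * x * d, f * y * d])

mu = np.array([0.3, -0.2, 4.0])
Sigma = np.diag([0.05, 0.02, 0.08])
mean2d, cov2d = unscented_project(mu, Sigma, fisheye_like)
```

Because the transform only needs point evaluations of the projection, any distorted camera model works unchanged; making `project` depend on a per-point capture time would likewise model effects such as rolling shutter.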
Related papers
- REdiSplats: Ray Tracing for Editable Gaussian Splatting [0.0]
We introduce REdiSplats, which employs ray tracing and a mesh-based representation of flat 3D Gaussians.
In practice, we model the scene using flat Gaussian distributions parameterized by the mesh.
We can render our models using 3D tools such as Blender or Nvdiffrast, which opens the possibility of integrating them with all existing 3D graphics techniques.
arXiv Detail & Related papers (2025-03-15T22:42:21Z) - 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multiview images.
3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending.
Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z) - L3DG: Latent 3D Gaussian Diffusion [74.36431175937285]
L3DG is the first approach for generative 3D modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation.
We employ a sparse convolutional architecture to efficiently operate on room-scale scenes.
By leveraging the 3D Gaussian representation, the generated scenes can be rendered from arbitrary viewpoints in real-time.
arXiv Detail & Related papers (2024-10-17T13:19:32Z) - EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate with fewer blending issues than 3DGS and follow-up work on view-consistent rendering.
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - EaDeblur-GS: Event assisted 3D Deblur Reconstruction with Gaussian Splatting [8.842593320829785]
Event-assisted 3D Deblur Reconstruction with Gaussian Splatting (EaDeblur-GS) is presented.
It integrates event camera data to enhance the robustness of 3DGS against motion blur.
It achieves sharp 3D reconstructions in real-time, demonstrating performance comparable to state-of-the-art methods.
arXiv Detail & Related papers (2024-07-18T13:55:54Z) - 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision.
arXiv Detail & Related papers (2024-07-09T17:59:30Z) - 3D-HGS: 3D Half-Gaussian Splatting [5.766096863155448]
Photo-realistic 3D Reconstruction is a fundamental problem in 3D computer vision.
We propose to employ 3D Half-Gaussian (3D-HGS) kernels, which can be used as a plug-and-play kernel.
arXiv Detail & Related papers (2024-06-04T19:04:29Z) - GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian using the shortest eigenvector, with a directional masking scheme forcing accurate normal estimation without external supervision (a sketch of this normal definition follows the list below).
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
arXiv Detail & Related papers (2023-12-08T16:05:15Z) - Mip-Splatting: Alias-free 3D Gaussian Splatting [52.366815964832426]
3D Gaussian Splatting has demonstrated impressive novel view synthesis results, reaching high fidelity and efficiency.
Strong artifacts can be observed when changing the sampling rate, e.g., by changing the focal length or camera distance.
We find that the source for this phenomenon can be attributed to the lack of 3D frequency constraints and the usage of a 2D dilation filter.
arXiv Detail & Related papers (2023-11-27T13:03:09Z) - Compact 3D Gaussian Representation for Radiance Field [14.729871192785696]
We propose a learnable mask strategy to reduce the number of 3D Gaussian points without sacrificing performance.
We also propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
arXiv Detail & Related papers (2023-11-22T20:31:16Z)
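As a companion to the GIR entry above, the sketch below illustrates its stated normal definition: the eigenvector of the Gaussian's covariance with the smallest eigenvalue, oriented toward the camera. This is a hypothetical illustration of that one sentence; `gaussian_normal` is not GIR's API, and GIR's directional masking scheme is omitted.

```python
import numpy as np

def gaussian_normal(Sigma, view_dir):
    """Normal of a 3D Gaussian: the covariance eigenvector with the
    smallest eigenvalue, i.e., the particle's flattest direction."""
    eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
    n = eigvecs[:, 0]                          # shortest principal axis
    # Flip so the normal faces the camera (view_dir points from camera to point).
    return -n if np.dot(n, view_dir) > 0 else n
```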