ODGS: 3D Scene Reconstruction from Omnidirectional Images with 3D Gaussian Splattings
- URL: http://arxiv.org/abs/2410.20686v1
- Date: Mon, 28 Oct 2024 02:45:13 GMT
- Title: ODGS: 3D Scene Reconstruction from Omnidirectional Images with 3D Gaussian Splattings
- Authors: Suyoung Lee, Jaeyoung Chung, Jaeyoo Huh, Kyoung Mu Lee
- Abstract summary: We present ODGS, a novel rasterization pipeline for omnidirectional images with a geometric interpretation.
The entire rasterization process is parallelized using CUDA, achieving optimization and rendering speeds 100 times faster than NeRF-based methods.
Results show ODGS restores fine details effectively, even when reconstructing large 3D scenes.
- Score: 48.72040500647568
- License:
- Abstract: Omnidirectional (or 360-degree) images are increasingly being used for 3D applications since they allow the rendering of an entire scene with a single image. Existing works based on neural radiance fields demonstrate successful 3D reconstruction quality on egocentric videos, yet they suffer from long training and rendering times. Recently, 3D Gaussian splatting has gained attention for its fast optimization and real-time rendering. However, directly applying a perspective rasterizer to omnidirectional images results in severe distortion due to the different optical properties of the two image domains. In this work, we present ODGS, a novel rasterization pipeline for omnidirectional images with a geometric interpretation. For each Gaussian, we define a tangent plane that touches the unit sphere and is perpendicular to the ray headed toward the Gaussian center. We then leverage a perspective camera rasterizer to project the Gaussian onto the corresponding tangent plane. The projected Gaussians are transformed and combined into the omnidirectional image, finalizing the omnidirectional rasterization process. This interpretation reveals the implicit assumptions within the proposed pipeline, which we verify through mathematical proofs. The entire rasterization process is parallelized using CUDA, achieving optimization and rendering speeds 100 times faster than NeRF-based methods. Our comprehensive experiments highlight the superiority of ODGS by delivering the best reconstruction and perceptual quality across various datasets. Additionally, results on roaming datasets demonstrate that ODGS restores fine details effectively, even when reconstructing large 3D scenes. The source code is available on our project page (https://github.com/esw0116/ODGS).
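To make the tangent-plane step concrete, the following NumPy sketch projects a single Gaussian onto its tangent plane through a local perspective camera, using the standard local-affine (Jacobian) approximation from 3DGS. It illustrates the geometric idea only and is not the authors' CUDA rasterizer; the final warp from tangent-plane coordinates to equirectangular pixels is omitted.

```python
import numpy as np

def tangent_frame(mu):
    """Local camera frame whose z-axis is the ray from the sphere center to the Gaussian.

    The tangent plane touches the unit sphere at mu/||mu|| and is perpendicular
    to this ray, matching the construction described in the abstract.
    """
    z = mu / np.linalg.norm(mu)
    up = np.array([0.0, 1.0, 0.0])
    if abs(np.dot(up, z)) > 0.99:          # avoid a degenerate up vector near the poles
        up = np.array([1.0, 0.0, 0.0])
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])             # rows are the local axes

def project_gaussian_to_tangent_plane(mu, cov):
    """Perspective-project one 3D Gaussian onto its tangent plane (illustrative only).

    Uses the usual local-affine (Jacobian) approximation of the pinhole
    projection, as in standard 3DGS; returns the 2D mean and covariance in
    tangent-plane coordinates.
    """
    R = tangent_frame(mu)
    mu_c = R @ mu                          # center in the local frame; approx (0, 0, ||mu||)
    cov_c = R @ cov @ R.T
    t = mu_c[2]                            # depth along the ray toward the Gaussian
    J = np.array([[1.0 / t, 0.0, -mu_c[0] / t**2],
                  [0.0, 1.0 / t, -mu_c[1] / t**2]])
    mean2d = np.array([mu_c[0] / t, mu_c[1] / t])
    cov2d = J @ cov_c @ J.T
    return mean2d, cov2d
```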
Related papers
- EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view-consistent rendering.
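As a rough illustration of exact emission-only volume rendering over constant-density ellipsoids (the EVER primitive), the sketch below sorts ray-ellipsoid entry/exit events and integrates transmittance in closed form per segment. The ellipsoid fields `center`, `A` (inverse shape matrix), `sigma`, and `rgb` (a length-3 array) are an assumed parameterization, not the paper's.

```python
import numpy as np

def ray_ellipsoid_interval(o, d, center, A):
    """Entry/exit distances of ray o + t*d through the ellipsoid (x-c)^T A (x-c) <= 1.

    A is the inverse shape matrix (symmetric positive definite); returns None on a miss.
    """
    q = o - center
    a = d @ A @ d
    b = 2.0 * (q @ A @ d)
    c = q @ A @ q - 1.0
    disc = b * b - 4.0 * a * c
    if disc <= 0.0:
        return None
    s = np.sqrt(disc)
    return (-b - s) / (2.0 * a), (-b + s) / (2.0 * a)

def render_ray(o, d, ellipsoids):
    """Exact emission-only volume rendering of constant-density ellipsoids along one ray.

    Between sorted entry/exit events the total density is constant, so the
    transmittance and emitted color both integrate in closed form per segment.
    """
    events = []
    for e in ellipsoids:
        hit = ray_ellipsoid_interval(o, d, e['center'], e['A'])
        if hit is None:
            continue
        t0, t1 = max(hit[0], 0.0), hit[1]
        if t1 <= t0:
            continue
        events += [(t0, e['sigma'], e['sigma'] * e['rgb']),
                   (t1, -e['sigma'], -e['sigma'] * e['rgb'])]
    events.sort(key=lambda ev: ev[0])

    color, T = np.zeros(3), 1.0            # accumulated color and transmittance
    sigma, c_sum, t_prev = 0.0, np.zeros(3), 0.0
    for t, dsigma, dc in events:
        if sigma > 1e-12:
            alpha = 1.0 - np.exp(-sigma * (t - t_prev))   # exact segment opacity
            color += T * alpha * (c_sum / sigma)          # density-weighted segment color
            T *= 1.0 - alpha
        sigma += dsigma
        c_sum = c_sum + dc
        t_prev = t
    return color
```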
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - OmniGS: Fast Radiance Field Reconstruction using Omnidirectional Gaussian Splatting [27.543561055868697]
The current 3D Gaussian Splatting system only supports radiance field reconstruction using undistorted perspective images.
We present OmniGS, a novel omnidirectional Gaussian splatting system, to take advantage of omnidirectional images for fast radiance field reconstruction.
arXiv Detail & Related papers (2024-04-04T05:10:26Z) - GS2Mesh: Surface Reconstruction from Gaussian Splatting via Novel Stereo Views [9.175560202201819]
3D Gaussian Splatting (3DGS) has emerged as an efficient approach for accurately representing scenes.
We propose a novel approach for bridging the gap between the noisy 3DGS representation and the smooth 3D mesh representation.
We render stereo-aligned pairs of images corresponding to the original training poses, feed the pairs into a stereo model to get a depth profile, and finally fuse all of the profiles together to get a single mesh.
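A minimal sketch of that three-stage loop (render a stereo pair, estimate depth, fuse) is given below; `render_fn`, `stereo_fn`, and the `tsdf` object are hypothetical caller-supplied stand-ins for a 3DGS renderer, an off-the-shelf stereo network, and a TSDF fusion backend, and the baseline value is illustrative.

```python
import numpy as np

def translate_along_camera_x(pose, baseline):
    """Shift a 4x4 camera-to-world pose along its own x-axis by `baseline` meters."""
    shifted = pose.copy()
    shifted[:3, 3] += baseline * pose[:3, 0]
    return shifted

def fuse_stereo_depths(render_fn, stereo_fn, tsdf, train_poses, focal, baseline=0.07):
    """Sketch of a stereo-based meshing loop in the spirit of GS2Mesh.

    render_fn(pose) renders an image from the trained 3DGS model, stereo_fn(left,
    right) returns a disparity map, and tsdf exposes integrate() and extract_mesh().
    All three are placeholders; names and signatures are illustrative only.
    """
    for pose in train_poses:
        right_pose = translate_along_camera_x(pose, baseline)   # stereo-aligned second view
        left, right = render_fn(pose), render_fn(right_pose)
        disparity = stereo_fn(left, right)
        depth = baseline * focal / np.maximum(disparity, 1e-6)  # standard stereo relation
        tsdf.integrate(depth, left, pose)                       # fuse this depth profile
    return tsdf.extract_mesh()                                  # single mesh from all profiles
```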
arXiv Detail & Related papers (2024-04-02T10:13:18Z) - Recent Advances in 3D Gaussian Splatting [31.3820273122585]
3D Gaussian Splatting has greatly accelerated the rendering speed of novel view synthesis.
The explicit representation of 3D Gaussian Splatting facilitates editing tasks like dynamic reconstruction, geometry editing, and physical simulation.
We present a literature review of recent 3D Gaussian Splatting methods, which can be roughly classified into 3D reconstruction, 3D editing, and other downstream applications.
arXiv Detail & Related papers (2024-03-17T07:57:08Z) - Identifying Unnecessary 3D Gaussians using Clustering for Fast Rendering of 3D Gaussian Splatting [2.878831747437321]
3D-GS is a new rendering approach that outperforms the neural radiance field (NeRF) in terms of both speed and image quality.
We propose a computational reduction technique that quickly identifies unnecessary 3D Gaussians in real-time for rendering the current view.
For the Mip-NeRF360 dataset, the proposed technique excludes 63% of 3D Gaussians on average before the 2D image projection, which reduces the overall rendering time by almost 38.3% without sacrificing peak signal-to-noise ratio (PSNR).
The proposed accelerator also achieves a speedup of 10.7x compared to a GPU.
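One way to realize such cluster-level skipping is sketched below: cull whole clusters of Gaussians against the view frustum before any 2D projection. The bounding-sphere and plane-test details are assumptions for illustration; the paper's actual clustering and culling criteria may differ.

```python
import numpy as np

def cluster_bounding_spheres(centers, labels, k):
    """Centroid and radius of the bounding sphere of each cluster of Gaussian centers."""
    spheres = []
    for c in range(k):
        pts = centers[labels == c]
        centroid = pts.mean(axis=0)
        radius = np.linalg.norm(pts - centroid, axis=1).max()
        spheres.append((centroid, radius))
    return spheres

def visible_gaussian_mask(labels, spheres, frustum_planes):
    """Per-Gaussian mask keeping only Gaussians whose cluster sphere meets the frustum.

    frustum_planes is an (N, 4) array of plane equations (nx, ny, nz, d) with
    inward-pointing normals; a cluster is culled when its sphere lies entirely
    behind any plane.
    """
    keep = np.ones(len(spheres), dtype=bool)
    for i, (centroid, radius) in enumerate(spheres):
        signed = frustum_planes[:, :3] @ centroid + frustum_planes[:, 3]
        if np.any(signed < -radius):       # fully outside one frustum plane
            keep[i] = False
    return keep[labels]                    # expand the cluster decision to every Gaussian
```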
arXiv Detail & Related papers (2024-02-21T14:16:49Z) - Splatter Image: Ultra-Fast Single-View 3D Reconstruction [67.96212093828179]
Splatter Image is based on Gaussian Splatting, which allows fast and high-quality reconstruction of 3D scenes from multiple images.
We learn a neural network that, at test time, performs reconstruction in a feed-forward manner, at 38 FPS.
On several synthetic, real, multi-category and large-scale benchmark datasets, we achieve better results in terms of PSNR, LPIPS, and other metrics while training and evaluating much faster than prior works.
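Conceptually, the feed-forward reconstruction maps each input pixel to the parameters of one 3D Gaussian. The toy module below illustrates that per-pixel prediction with a single convolution and an assumed 15-channel layout (depth, offset, scales, rotation, opacity, color); it is not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplatterHead(nn.Module):
    """Toy per-pixel Gaussian predictor in the spirit of Splatter Image.

    A 2D image-to-image network (a single conv here, for brevity) maps every
    input pixel to the parameters of one 3D Gaussian. Channel layout and
    backbone are illustrative assumptions.
    """
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Conv2d(in_ch, 15, kernel_size=3, padding=1)

    def forward(self, image):                             # image: (B, 3, H, W)
        x = self.net(image)
        return {
            "depth":    F.softplus(x[:, 0:1]),            # positive depth along each pixel ray
            "offset":   x[:, 1:4],                        # small 3D offset around that point
            "scales":   torch.exp(x[:, 4:7]),             # per-axis Gaussian scales
            "rotation": F.normalize(x[:, 7:11], dim=1),   # unit quaternion
            "opacity":  torch.sigmoid(x[:, 11:12]),
            "rgb":      torch.sigmoid(x[:, 12:15]),
        }
```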
arXiv Detail & Related papers (2023-12-20T16:14:58Z) - pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction [26.72289913260324]
pixelSplat is a feed-forward model that learns to reconstruct 3D radiance fields parameterized by 3D Gaussian primitives from pairs of images.
Our model features real-time and memory-efficient rendering for scalable training as well as fast 3D reconstruction at inference time.
arXiv Detail & Related papers (2023-12-19T17:03:50Z) - GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian using the shortest eigenvector, with a directional masking scheme forcing accurate normal estimation without external supervision.
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
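A small sketch of the normal computation: take the eigenvector of the covariance with the smallest eigenvalue (the Gaussian's shortest axis) and orient it toward the camera. The sign flip here is a simple stand-in for the directional masking scheme and may differ from the paper's exact formulation.

```python
import numpy as np

def gaussian_normal(cov, view_dir):
    """Normal of one 3D Gaussian: the eigenvector of the smallest eigenvalue.

    The shortest axis of the Gaussian ellipsoid acts as the surface normal;
    view_dir points from the camera toward the Gaussian.
    """
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues for a symmetric matrix
    n = eigvecs[:, 0]                        # axis with the smallest variance
    if np.dot(n, view_dir) > 0.0:
        n = -n                               # orient the normal toward the camera
    return n
```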
arXiv Detail & Related papers (2023-12-08T16:05:15Z) - Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
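The sketch below illustrates the anchor idea under simplifying assumptions: each anchor carries a learnable feature, and a small MLP conditioned on the viewing direction and distance decodes a fixed number of local Gaussians (only offsets and opacities here). The layer sizes, attribute set, and k are illustrative, not Scaffold-GS's actual design.

```python
import torch
import torch.nn as nn

class AnchorDecoder(nn.Module):
    """Toy anchor-based decoder in the spirit of Scaffold-GS (illustrative only)."""
    def __init__(self, feat_dim=32, k=10):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 4, 64), nn.ReLU(),
            nn.Linear(64, k * 4),                    # per-Gaussian: offset (3) + opacity (1)
        )

    def forward(self, anchor_xyz, anchor_feat, cam_pos):
        rel = cam_pos - anchor_xyz                   # (N, 3) anchor-to-camera vectors
        dist = rel.norm(dim=-1, keepdim=True)
        view = rel / (dist + 1e-8)
        out = self.mlp(torch.cat([anchor_feat, view, dist], dim=-1))
        out = out.view(-1, self.k, 4)
        offsets = out[..., :3]                       # local positions relative to the anchor
        opacity = torch.sigmoid(out[..., 3:])
        xyz = anchor_xyz.unsqueeze(1) + offsets      # world-space Gaussian centers
        return xyz, opacity
```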
arXiv Detail & Related papers (2023-11-30T17:58:57Z) - VoGE: A Differentiable Volume Renderer using Gaussian Ellipsoids for Analysis-by-Synthesis [62.47221232706105]
We propose VoGE, which utilizes the Gaussian reconstruction kernels as volumetric primitives.
To efficiently render via VoGE, we propose an approximate closed-form solution for the volume density aggregation and a coarse-to-fine rendering strategy.
VoGE outperforms SoTA when applied to various vision tasks, e.g., object pose estimation, shape/texture fitting, and reasoning.
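The closed-form piece that such Gaussian-kernel volume renderers build on is the analytic line integral of a 3D Gaussian density along a ray; a minimal sketch follows. Integrating over the whole line (rather than only t >= 0) and ignoring occlusion are simplifications relative to VoGE's full aggregation scheme.

```python
import numpy as np

def gaussian_ray_integral(o, d, mu, cov):
    """Closed-form line integral of an unnormalized 3D Gaussian density along a ray.

    For density exp(-0.5 * (x - mu)^T cov^{-1} (x - mu)) and ray x(t) = o + t*d
    with unit direction d, completing the square in t gives
        sqrt(2*pi / a) * exp(-0.5 * (c - b**2 / a)),
    where A = cov^{-1}, q = o - mu, a = d^T A d, b = d^T A q, c = q^T A q.
    """
    A = np.linalg.inv(cov)
    q = o - mu
    a = d @ A @ d
    b = d @ A @ q
    c = q @ A @ q
    return np.sqrt(2.0 * np.pi / a) * np.exp(-0.5 * (c - b * b / a))
```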
arXiv Detail & Related papers (2022-05-30T19:52:11Z)