Self-Evolving Depth-Supervised 3D Gaussian Splatting from Rendered Stereo Pairs
- URL: http://arxiv.org/abs/2409.07456v1
- Date: Wed, 11 Sep 2024 17:59:58 GMT
- Title: Self-Evolving Depth-Supervised 3D Gaussian Splatting from Rendered Stereo Pairs
- Authors: Sadra Safadoust, Fabio Tosi, Fatma Güney, Matteo Poggi
- Abstract summary: 3D Gaussian Splatting (GS) struggles significantly to represent the underlying 3D scene geometry accurately.
We address this limitation with a comprehensive analysis of how depth priors can be integrated throughout the optimization process.
We then present a novel strategy that dynamically exploits depth cues from a readily available stereo network, processing virtual stereo pairs rendered by the GS model itself during training and achieving consistent self-improvement.
- Score: 27.364205809607302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (GS) struggles significantly to represent the underlying 3D scene geometry accurately, resulting in inaccuracies and floating artifacts when rendering depth maps. In this paper, we address this limitation, undertaking a comprehensive analysis of the integration of depth priors throughout the optimization process of Gaussian primitives, and present a novel strategy for this purpose. The latter dynamically exploits depth cues from a readily available stereo network, processing virtual stereo pairs rendered by the GS model itself during training and achieving consistent self-improvement of the scene representation. Experimental results on three popular datasets, breaking ground as the first to assess depth accuracy for these models, validate our findings.
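The loop described in the abstract can be made concrete: render a training view (and its depth) with the GS model, render a second view displaced by a small horizontal baseline to form a virtual stereo pair, run an off-the-shelf stereo network on the pair, and use the resulting depth to supervise the rendered one. Below is a minimal PyTorch sketch under these assumptions; `render_rgbd` and `stereo_net` are hypothetical stand-ins for the paper's (unspecified) renderer and stereo model.

```python
# Hedged sketch of virtual-stereo depth supervision for a 3DGS model.
# `render_rgbd(w2c) -> (rgb, depth)` and `stereo_net(left, right) -> disparity`
# are hypothetical interfaces, not the authors' actual API.
import torch
import torch.nn.functional as F

def virtual_stereo_depth_loss(render_rgbd, stereo_net, w2c, focal, baseline):
    """Depth loss from a virtual stereo pair rendered by the GS model itself.

    w2c:      (4, 4) world-to-camera matrix of the training view.
    focal:    focal length in pixels.
    baseline: virtual stereo baseline in scene units.
    """
    left_rgb, rendered_depth = render_rgbd(w2c)

    # Virtual right camera: displace the camera center by `baseline` along
    # its own x-axis. In world-to-camera form this is a translation shift:
    # t' = t - [baseline, 0, 0].
    right_w2c = w2c.clone()
    right_w2c[:3, 3] -= torch.tensor([baseline, 0.0, 0.0])
    right_rgb, _ = render_rgbd(right_w2c)

    # Off-the-shelf stereo matching on the rendered pair.
    disparity = stereo_net(left_rgb, right_rgb)

    # Standard rectified-stereo geometry: depth = focal * baseline / disparity.
    stereo_depth = focal * baseline / disparity.clamp(min=1e-6)

    # Detach the stereo prediction so gradients flow into the Gaussians only.
    return F.l1_loss(rendered_depth, stereo_depth.detach())
```

Because the stereo network's inputs are renders of the current model, the quality of its depth cues improves as the representation improves, which is a plausible reading of the "self-evolving" behavior the abstract claims.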
Related papers
- SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian Splatting [4.121797302827049]
We propose SelfSplat, a novel 3D Gaussian Splatting model designed to perform pose-free and 3D prior-free generalizable 3D reconstruction from unposed multi-view images.
Our model addresses these challenges by effectively integrating explicit 3D representations with self-supervised depth and pose estimation techniques.
To demonstrate the performance of our method, we evaluate it on large-scale real-world datasets, including RealEstate10K, ACID, and DL3DV.
arXiv Detail & Related papers (2024-11-26T08:01:50Z)
- Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results.
3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
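For intuition, the two kernel families differ mainly in their falloff: a Gaussian decays smoothly with infinite support, while a linear ("tent") kernel reaches exactly zero at a finite radius. The one-dimensional sketch below is purely illustrative; the tent form is an assumption, not necessarily 3DLS's exact kernel.

```python
# Illustrative 1D comparison of Gaussian vs. linear (tent) kernels.
import numpy as np

def gaussian_kernel(x, sigma=1.0):
    # Smooth falloff with infinite support, as in standard 3DGS.
    return np.exp(-0.5 * (x / sigma) ** 2)

def linear_kernel(x, radius=1.0):
    # Compactly supported tent: exactly zero for |x| >= radius.
    return np.clip(1.0 - np.abs(x) / radius, 0.0, None)

x = np.linspace(-3.0, 3.0, 7)
print(gaussian_kernel(x))  # soft tails never quite reach zero
print(linear_kernel(x))    # hard cutoff yields sharper boundaries
```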
arXiv Detail & Related papers (2024-11-19T11:59:54Z)
- GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a superior rendering speed.
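The "lifting" step is, at its core, pixel-wise back-projection through a pinhole camera: X = (u - cx) Z / fx, Y = (v - cy) Z / fy. A generic sketch of this operation follows; it illustrates the geometry only and is not GPS-Gaussian+'s actual regression module.

```python
# Generic pixel-wise back-projection of a depth map (pinhole camera model).
import torch

def lift_to_3d(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map to per-pixel 3D points in camera space."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    z = depth
    x = (u - cx) / fx * z  # X = (u - cx) * Z / fx
    y = (v - cy) / fy * z  # Y = (v - cy) * Z / fy
    return torch.stack((x, y, z), dim=-1)  # (H, W, 3) points

# Each pixel's predicted 2D Gaussian parameters can then be attached to its
# back-projected 3D position to instantiate a Gaussian in the scene.
```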
arXiv Detail & Related papers (2024-11-18T08:18:44Z)
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- DepthSplat: Connecting Gaussian Splatting and Depth [90.06180236292866]
We present DepthSplat to connect Gaussian splatting and depth estimation.
We first contribute a robust multi-view depth model by leveraging pre-trained monocular depth features.
We also show that Gaussian splatting can serve as an unsupervised pre-training objective.
arXiv Detail & Related papers (2024-10-17T17:59:58Z)
- Mode-GS: Monocular Depth Guided Anchored 3D Gaussian Splatting for Robust Ground-View Scene Rendering [47.879695094904015]
We present a novel-view rendering algorithm, Mode-GS, for ground-robot trajectory datasets.
Our approach is based on using anchored Gaussian splats, which are designed to overcome the limitations of existing 3D Gaussian splatting algorithms.
Our method improves rendering performance, as measured by PSNR, SSIM, and LPIPS, in ground scenes with free trajectory patterns.
arXiv Detail & Related papers (2024-10-06T23:01:57Z)
- RetinaGS: Scalable Training for Dense Scene Rendering with Billion-Scale 3D Gaussians [12.461531097629857]
We design a general model parallel training method for 3DGS, named RetinaGS, which uses a proper rendering equation.
We observe a clear positive trend of increasing visual quality as the number of primitives increases with our method.
We also demonstrate the first attempt at training a 3DGS model with more than one billion primitives on the full MatrixCity dataset.
arXiv Detail & Related papers (2024-06-17T17:59:56Z)
- Uncertainty-guided Optimal Transport in Depth Supervised Sparse-View 3D Gaussian [49.21866794516328]
3D Gaussian splatting has demonstrated impressive performance in real-time novel view synthesis.
Previous approaches have incorporated depth supervision into the training of 3D Gaussians to mitigate overfitting.
We introduce a novel method to supervise the depth distribution of 3D Gaussians, utilizing depth priors with integrated uncertainty estimates.
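One standard way to fold uncertainty estimates into depth supervision is a heteroscedastic loss that down-weights residuals where the prior is unreliable; the sketch below uses that generic formulation as an assumption for illustration, and is not the paper's optimal-transport scheme.

```python
# Generic uncertainty-weighted depth loss (heteroscedastic L1); an assumed
# illustration, not the paper's optimal-transport formulation.
import torch

def uncertainty_weighted_depth_loss(rendered_depth, prior_depth, sigma):
    # Large sigma (uncertain prior) shrinks the residual's weight; the
    # log term keeps sigma from being inflated everywhere as a shortcut.
    residual = torch.abs(rendered_depth - prior_depth)
    return (residual / sigma + torch.log(sigma)).mean()
```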
arXiv Detail & Related papers (2024-05-30T03:18:30Z)
- InFusion: Inpainting 3D Gaussians via Learning Depth Completion from Diffusion Prior [36.23604779569843]
3D Gaussians have recently emerged as an efficient representation for novel view synthesis.
This work studies its editability with a particular focus on the inpainting task.
Compared to 2D inpainting, the crux of inpainting 3D Gaussians is to figure out the rendering-relevant properties of the introduced points.
arXiv Detail & Related papers (2024-04-17T17:59:53Z)
- GS2Mesh: Surface Reconstruction from Gaussian Splatting via Novel Stereo Views [9.175560202201819]
3D Gaussian Splatting (3DGS) has emerged as an efficient approach for accurately representing scenes.
We propose a novel approach for bridging the gap between the noisy 3DGS representation and the smooth 3D mesh representation.
We render stereo-aligned pairs of images corresponding to the original training poses, feed the pairs into a stereo model to get a depth profile, and finally fuse all of the profiles together to get a single mesh.
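The final fusion step maps naturally onto standard TSDF integration. A sketch with Open3D follows, assuming per-view color images, metric depth maps, and camera parameters are already in hand; GS2Mesh's exact fusion procedure and settings may differ.

```python
# TSDF fusion of per-view depth maps into a single mesh, using Open3D.
import open3d as o3d

def fuse_depths_to_mesh(rgbs, depths, intrinsic, extrinsics, voxel=0.01):
    """rgbs: uint8 (H, W, 3) arrays; depths: float32 (H, W) metric depth maps;
    intrinsic: o3d.camera.PinholeCameraIntrinsic; extrinsics: 4x4 world-to-camera."""
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel,
        sdf_trunc=4 * voxel,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
    )
    for rgb, depth, w2c in zip(rgbs, depths, extrinsics):
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(rgb), o3d.geometry.Image(depth),
            depth_scale=1.0,        # depths are already metric
            depth_trunc=10.0,       # ignore depths beyond 10 units
            convert_rgb_to_intensity=False,
        )
        volume.integrate(rgbd, intrinsic, w2c)
    return volume.extract_triangle_mesh()
```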
arXiv Detail & Related papers (2024-04-02T10:13:18Z)
- Q-SLAM: Quadric Representations for Monocular SLAM [85.82697759049388]
We reimagine volumetric representations through the lens of quadrics.
We use the quadric assumption to rectify noisy depth estimations from RGB inputs.
We introduce a novel quadric-decomposed transformer to aggregate information across quadrics.
arXiv Detail & Related papers (2024-03-12T23:27:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.