Depth-Regularized Optimization for 3D Gaussian Splatting in Few-Shot Images
- URL: http://arxiv.org/abs/2311.13398v3
- Date: Thu, 4 Jan 2024 08:19:16 GMT
- Title: Depth-Regularized Optimization for 3D Gaussian Splatting in Few-Shot Images
- Authors: Jaeyoung Chung, Jeongtaek Oh, and Kyoung Mu Lee
- Abstract summary: We introduce a dense depth map as a geometry guide to mitigate overfitting.
The adjusted depth aids in the color-based optimization of 3D Gaussian splatting.
We verify the proposed method on the NeRF-LLFF dataset with varying numbers of training images.
- Score: 47.14713579719103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a method to optimize Gaussian splatting with a
limited number of images while avoiding overfitting. Representing a 3D scene by
combining numerous Gaussian splats has yielded outstanding visual quality.
However, it tends to overfit the training views when only a small number of
images are available. To address this issue, we introduce a dense depth map as
a geometry guide to mitigate overfitting. We obtain the depth map from a
pre-trained monocular depth estimation model and align its scale and offset
using sparse COLMAP feature points. The adjusted depth guides the color-based
optimization of 3D Gaussian splatting, mitigating floating artifacts and
ensuring adherence to geometric constraints. We verify the proposed method on
the NeRF-LLFF dataset with varying numbers of training images. Our approach
demonstrates robust geometry compared to the original method that relies solely
on images. Project page: robot0321.github.io/DepthRegGS
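The scale-and-offset alignment described in the abstract can be sketched as an ordinary least-squares fit of the monocular depth against the sparse COLMAP depths. This is a minimal sketch under that assumption; the function name, the plain least-squares objective, and the input conventions are illustrative, not the paper's exact procedure.

```python
# Minimal sketch: align a monocular depth map to sparse COLMAP depths
# by fitting a per-image scale s and offset t in the least-squares sense.
import numpy as np

def align_depth(mono_depth, uv, sparse_depth):
    """Fit s, t so that s * mono + t approximates the sparse depths at the
    given pixels, then apply them to the whole map.

    mono_depth   : (H, W) monocular depth prediction
    uv           : (N, 2) integer pixel coordinates (col, row) of COLMAP points
    sparse_depth : (N,) depths of those points from COLMAP
    """
    d = mono_depth[uv[:, 1], uv[:, 0]]          # monocular depth at the sparse points
    A = np.stack([d, np.ones_like(d)], axis=1)  # design matrix [d, 1]
    (s, t), *_ = np.linalg.lstsq(A, sparse_depth, rcond=None)
    return s * mono_depth + t

# Toy check: sparse depths generated as 2 * mono + 1 should be recovered exactly.
mono = np.linspace(1.0, 5.0, 16).reshape(4, 4)
uv = np.array([[0, 0], [3, 0], [0, 3], [3, 3]])
sparse = 2.0 * mono[uv[:, 1], uv[:, 0]] + 1.0
aligned = align_depth(mono, uv, sparse)
```

The aligned map can then serve as a per-pixel geometry guide during the color-based Gaussian splatting optimization.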
Related papers
- Splat-SLAM: Globally Optimized RGB-only SLAM with 3D Gaussians [87.48403838439391]
3D Gaussian Splatting has emerged as a powerful representation of geometry and appearance for RGB-only dense Simultaneous Localization and Mapping (SLAM).
We propose the first RGB-only SLAM system with a dense 3D Gaussian map representation.
Our experiments on the Replica, TUM-RGBD, and ScanNet datasets indicate the effectiveness of globally optimized 3D Gaussians.
arXiv Detail & Related papers (2024-05-26T12:26:54Z)
- InFusion: Inpainting 3D Gaussians via Learning Depth Completion from Diffusion Prior [36.23604779569843]
3D Gaussians have recently emerged as an efficient representation for novel view synthesis.
This work studies its editability with a particular focus on the inpainting task.
Compared to 2D inpainting, the crux of inpainting 3D Gaussians is to figure out the rendering-relevant properties of the introduced points.
arXiv Detail & Related papers (2024-04-17T17:59:53Z)
- AbsGS: Recovering Fine Details for 3D Gaussian Splatting [10.458776364195796]
The 3D Gaussian Splatting (3D-GS) technique couples 3D primitives with differentiable rasterization to achieve high-quality novel view synthesis.
However, 3D-GS frequently suffers from over-reconstruction in intricate scenes containing high-frequency details, leading to blurry rendered images.
We present a comprehensive analysis of the cause of these artifacts, namely gradient collision.
Our strategy efficiently identifies large Gaussians in over-reconstructed regions, and recovers fine details by splitting.
arXiv Detail & Related papers (2024-04-16T11:44:12Z)
- Compact 3D Gaussian Splatting For Dense Visual SLAM [26.47738770606461]
We propose a compact 3D Gaussian Splatting SLAM system that reduces the number and the parameter size of Gaussian ellipsoids.
A sliding-window-based masking strategy is first proposed to reduce redundant ellipsoids.
Our method achieves faster training and rendering speed while maintaining the state-of-the-art (SOTA) quality of the scene representation.
arXiv Detail & Related papers (2024-03-17T15:41:35Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- NEF: Neural Edge Fields for 3D Parametric Curve Reconstruction from Multi-view Images [18.303674194874457]
We study the problem of reconstructing 3D feature curves of an object from a set of calibrated multi-view images.
We learn a neural implicit field representing the density distribution of 3D edges, which we refer to as a Neural Edge Field (NEF).
NEF is optimized with a view-based rendering loss where a 2D edge map is rendered at a given view and is compared to the ground-truth edge map extracted from the image of that view.
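The view-based rendering loss described for NEF can be sketched as a per-view comparison between a rendered edge map and the ground-truth edge map extracted from that view's image. The squared-error form below is an assumption for illustration; NEF's actual objective may include additional terms.

```python
# Minimal sketch of a view-based edge-map rendering loss: compare the
# edge map rendered from the neural field at a view against the
# ground-truth edge map extracted from that view's image.
import numpy as np

def edge_rendering_loss(rendered_edges, gt_edges):
    """Mean squared error between a rendered 2D edge map and the
    ground-truth edge map for the same view (both (H, W) arrays)."""
    return float(np.mean((rendered_edges - gt_edges) ** 2))
```

Summing this loss over all calibrated views drives the optimization of the edge density field.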
arXiv Detail & Related papers (2023-03-14T06:45:13Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into the recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- Depth Completion using Piecewise Planar Model [94.0808155168311]
A depth map can be represented by a set of learned bases and recovered efficiently in closed form.
However, one issue with this method is that it may create artifacts when colour boundaries are inconsistent with depth boundaries.
We enforce a stricter model in depth recovery: a piecewise planar model.
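The closed-form basis representation mentioned above can be sketched as an ordinary least-squares fit of basis weights. The random stand-in bases and the function name below are illustrative assumptions; the paper learns its bases rather than fixing them.

```python
# Minimal sketch: represent a depth map as a linear combination of basis
# maps and solve for the weights in closed form via least squares.
import numpy as np

def fit_depth_bases(bases, depth):
    """bases: (K, H, W) basis maps; depth: (H, W) target depth map.
    Returns the weights w minimizing ||sum_k w_k * bases[k] - depth||^2
    and the reconstructed depth map."""
    K = bases.shape[0]
    B = bases.reshape(K, -1).T                      # (H*W, K) design matrix
    w, *_ = np.linalg.lstsq(B, depth.ravel(), rcond=None)
    recon = (B @ w).reshape(depth.shape)
    return w, recon

# Toy check: a depth map built exactly from the bases should be recovered.
rng = np.random.default_rng(0)
bases = rng.normal(size=(3, 4, 4))
w_true = np.array([1.0, -2.0, 0.5])
depth = np.tensordot(w_true, bases, axes=1)
w_fit, recon = fit_depth_bases(bases, depth)
```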
arXiv Detail & Related papers (2020-12-06T07:11:46Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.