Improving Geometry in Sparse-View 3DGS via Reprojection-based DoF Separation
- URL: http://arxiv.org/abs/2412.14568v1
- Date: Thu, 19 Dec 2024 06:39:28 GMT
- Title: Improving Geometry in Sparse-View 3DGS via Reprojection-based DoF Separation
- Authors: Yongsung Kim, Minjun Park, Jooyoung Choi, Sungroh Yoon
- Abstract summary: Recent learning-based Multi-View Stereo models have demonstrated state-of-the-art performance in sparse-view 3D reconstruction.
We propose reprojection-based DoF separation, a method that distinguishes positional DoFs by their uncertainty: image-plane-parallel DoFs and a ray-aligned DoF.
We show that separating the positional DoFs of Gaussians and applying targeted constraints effectively suppresses geometric artifacts.
- Score: 35.17953057142724
- Abstract: Recent learning-based Multi-View Stereo models have demonstrated state-of-the-art performance in sparse-view 3D reconstruction. However, directly applying 3D Gaussian Splatting (3DGS) as a refinement step following these models presents challenges. We hypothesize that the excessive positional degrees of freedom (DoFs) in Gaussians induce geometry distortion, fitting color patterns at the cost of structural fidelity. To address this, we propose reprojection-based DoF separation, a method distinguishing positional DoFs in terms of uncertainty: image-plane-parallel DoFs and a ray-aligned DoF. To independently manage each DoF, we introduce a reprojection process along with tailored constraints for each DoF. Through experiments across various datasets, we confirm that separating the positional DoFs of Gaussians and applying targeted constraints effectively suppresses geometric artifacts, producing reconstruction results that are both visually and geometrically plausible.
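To make the idea concrete, below is a minimal numpy sketch of how a positional update for a Gaussian might be separated relative to a camera: the update is split into an image-plane-parallel part and a ray-aligned part, and the more uncertain ray-aligned part is damped. The decomposition and the damping factor are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def separate_dofs(delta, center, cam_origin):
    """Split a positional update `delta` for a Gaussian at `center` into
    a ray-aligned component (high depth uncertainty) and an
    image-plane-parallel component (lower uncertainty).
    Hypothetical decomposition, not the paper's code."""
    ray = center - cam_origin
    ray = ray / np.linalg.norm(ray)        # unit viewing ray
    along = np.dot(delta, ray) * ray       # ray-aligned part
    parallel = delta - along               # image-plane-parallel part
    return parallel, along

# Example: damp the uncertain ray-aligned motion, keep the parallel part.
rng = np.random.default_rng(0)
center = np.array([0.0, 0.0, 3.0])
cam_origin = np.zeros(3)
delta = rng.normal(size=3) * 0.01
parallel, along = separate_dofs(delta, center, cam_origin)
constrained_delta = parallel + 0.1 * along  # 0.1 is an assumed damping factor
```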
Related papers
- FOF-X: Towards Real-time Detailed Human Reconstruction from a Single Image [68.84221452621674]
We introduce FOF-X for real-time reconstruction of detailed human geometry from a single image.
FOF-X avoids the performance degradation caused by texture and lighting.
We enhance the inter-conversion algorithms between FOF and mesh representations with a Laplacian constraint and an automaton-based discontinuity matcher.
arXiv Detail & Related papers (2024-12-08T14:46:29Z)
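As an aside on the Laplacian constraint mentioned in the FOF-X entry above, a minimal sketch of a uniform mesh-Laplacian penalty follows; the adjacency format and the penalty form are our assumptions, not the paper's algorithm.

```python
import numpy as np

def uniform_laplacian(vertices, neighbors):
    """Uniform Laplacian: mean of each vertex's neighbors minus the vertex.
    `neighbors` maps a vertex index to the indices of adjacent vertices."""
    lap = np.zeros_like(vertices)
    for i, nbrs in neighbors.items():
        lap[i] = vertices[nbrs].mean(axis=0) - vertices[i]
    return lap

# A Laplacian constraint can penalize this residual so that reconstructed
# meshes stay locally smooth during FOF <-> mesh conversion (assumed usage).
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
loss_lap = (uniform_laplacian(V, adj) ** 2).sum()
```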
- GSurf: 3D Reconstruction via Signed Distance Fields with Direct Gaussian Supervision [3.2944910942497985]
Surface reconstruction from multi-view images is a core challenge in 3D vision.
Recent studies have explored signed distance fields (SDF) within Neural Radiance Fields (NeRF) to achieve high-fidelity surface reconstructions.
We introduce GSurf, a novel end-to-end method for learning a signed distance field directly from Gaussian primitives.
GSurf achieves faster training and rendering speeds while delivering 3D reconstruction quality comparable to neural implicit surface methods, such as VolSDF and NeuS.
arXiv Detail & Related papers (2024-11-24T05:55:19Z)
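A hedged sketch of what "direct Gaussian supervision" of an SDF could look like, under the assumption that Gaussian centers are treated as surface samples; the network, loss weights, and sampling below are ours, not GSurf's.

```python
import torch

def gsurf_style_loss(sdf_net, centers, lambda_eik=0.1):
    """Assumed direct supervision: the SDF should vanish at Gaussian
    centers, with an Eikonal term keeping gradient norms near 1."""
    centers = centers.clone().requires_grad_(True)
    sdf = sdf_net(centers)
    surface_loss = sdf.abs().mean()  # centers treated as on-surface points
    grad = torch.autograd.grad(sdf.sum(), centers, create_graph=True)[0]
    eikonal_loss = ((grad.norm(dim=-1) - 1.0) ** 2).mean()
    return surface_loss + lambda_eik * eikonal_loss

sdf_net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 1))
loss = gsurf_style_loss(sdf_net, torch.randn(1024, 3))
loss.backward()
```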
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the speed, scalability, and high-quality 3D reconstruction and view-synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves on SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- VortSDF: 3D Modeling with Centroidal Voronoi Tesselation on Signed Distance Field [5.573454319150408]
We introduce a volumetric optimization framework that combines explicit SDF fields with a shallow color network, in order to estimate 3D shape properties over tetrahedral grids.
Experimental results with Chamfer statistics validate this approach, showing unprecedented reconstruction quality across scenarios such as objects, open scenes, and humans.
arXiv Detail & Related papers (2024-07-29T09:46:39Z)
- GS-Octree: Octree-based 3D Gaussian Splatting for Robust Object-level 3D Reconstruction Under Strong Lighting [4.255847344539736]
We introduce a novel approach that combines octree-based implicit surface representations with Gaussian splatting.
Our method, which leverages the distribution of 3D Gaussians with SDFs, reconstructs more accurate geometry, particularly in images with specular highlights caused by strong lighting.
arXiv Detail & Related papers (2024-06-26T09:29:56Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field (e.g., NeRF) to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas prior methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor (sketched after this entry).
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
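Here is a minimal PyTorch sketch of the ray-wise idea: the Eikonal residual is weighted per ray rather than by one global constant. How RaNeuS actually derives the weights is not shown; the sigmoid weights below are placeholders.

```python
import torch

def raywise_eikonal(grad, ray_weight):
    """Eikonal residual weighted per ray instead of one global lambda.
    `grad`: (R, S, 3) SDF gradients at S samples along each of R rays;
    `ray_weight`: (R,) per-ray weights (placeholders here)."""
    residual = (grad.norm(dim=-1) - 1.0) ** 2        # (R, S)
    return (ray_weight[:, None] * residual).mean()

grad = torch.randn(8, 16, 3, requires_grad=True)
ray_weight = torch.sigmoid(torch.randn(8))           # one weight per ray
loss = raywise_eikonal(grad, ray_weight)
loss.backward()
```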
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
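A hedged sketch of test-time optimization with a frozen depth model: the network output stays fixed, and only per-frame scale and shift are optimized against a reference signal. The reference depths and the L1 loss are stand-ins we introduce for illustration, not FrozenRecon's objective.

```python
import torch

# The depth network stays frozen; only per-frame scale/shift are optimized.
frozen_depth = torch.rand(4, 1, 64, 64)       # stand-in for network output
sparse_ref = torch.rand(4, 1, 64, 64)         # assumed geometric reference
scale = torch.ones(4, requires_grad=True)
shift = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([scale, shift], lr=1e-2)

for _ in range(100):
    aligned = scale.view(-1, 1, 1, 1) * frozen_depth + shift.view(-1, 1, 1, 1)
    loss = (aligned - sparse_ref).abs().mean()  # placeholder consistency loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```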
- Gradient-SDF: A Semi-Implicit Surface Representation for 3D Reconstruction [53.315347543761426]
Gradient-SDF is a novel representation for 3D geometry that combines the advantages of implicit and explicit representations.
By storing both the signed distance field and its gradient vector field at every voxel, we enhance the capability of implicit representations.
We show that (1) the Gradient-SDF allows us to perform direct SDF tracking from depth images, using efficient storage schemes like hash maps, and that (2) the Gradient-SDF representation enables us to perform photometric bundle adjustment directly in a voxel representation.
arXiv Detail & Related papers (2021-11-26T18:33:14Z)
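To illustrate the representation, here is a minimal sparse-volume sketch that stores a signed distance and its gradient per voxel in a hash map, as the abstract describes; the class, voxelization, and first-order query are our assumptions for illustration.

```python
import numpy as np

class GradientSDFVolume:
    """Sparse volume: per voxel, a signed distance and its gradient,
    keyed by integer voxel coordinates in a hash map (a dict here)."""
    def __init__(self, voxel_size=0.02):
        self.voxel_size = voxel_size
        self.voxels = {}                      # (i, j, k) -> (sdf, grad)

    def _key(self, p):
        return tuple(np.floor(p / self.voxel_size).astype(int))

    def update(self, p, sdf, grad):
        self.voxels[self._key(p)] = (float(sdf), np.asarray(grad, float))

    def query(self, p):
        """First-order estimate: stored SDF plus a gradient correction."""
        k = self._key(p)
        if k not in self.voxels:
            return None
        sdf, grad = self.voxels[k]
        center = (np.array(k) + 0.5) * self.voxel_size
        return sdf + grad @ (p - center)

vol = GradientSDFVolume()
vol.update(np.array([0.1, 0.2, 0.3]), sdf=0.05, grad=[0.0, 0.0, 1.0])
print(vol.query(np.array([0.11, 0.2, 0.31])))
```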
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.