Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views
- URL: http://arxiv.org/abs/2504.20378v1
- Date: Tue, 29 Apr 2025 02:47:02 GMT
- Title: Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views
- Authors: Jiang Wu, Rui Li, Yu Zhu, Rong Guo, Jinqiu Sun, Yanning Zhang
- Abstract summary: We propose Sparse2DGS, an MVS-initialized Gaussian Splatting pipeline for complete and accurate reconstruction. Our key insight is to incorporate geometry-prioritized enhancement schemes, allowing for direct and robust geometric learning under ill-posed conditions. Sparse2DGS outperforms existing methods by notable margins while being $2\times$ faster than the NeRF-based fine-tuning approach.
- Score: 45.125032766506536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a Gaussian Splatting method for surface reconstruction using sparse input views. Previous methods relying on dense views struggle with extremely sparse Structure-from-Motion points for initialization. While learning-based Multi-view Stereo (MVS) provides dense 3D points, directly combining it with Gaussian Splatting leads to suboptimal results due to the ill-posed nature of sparse-view geometric optimization. We propose Sparse2DGS, an MVS-initialized Gaussian Splatting pipeline for complete and accurate reconstruction. Our key insight is to incorporate the geometric-prioritized enhancement schemes, allowing for direct and robust geometric learning under ill-posed conditions. Sparse2DGS outperforms existing methods by notable margins while being ${2}\times$ faster than the NeRF-based fine-tuning approach.
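To make the pipeline described in the abstract concrete, the PyTorch sketch below shows dense MVS initialization and a loss in which direct geometric supervision is weighted above the photometric term. This is a minimal, hedged sketch: the function names, the `render` callback, the view keys (`mvs_depth`, `mvs_normal`), and the weights are illustrative assumptions, not the authors' code or hyperparameters.

```python
import torch

def init_from_mvs(mvs_points):
    """Seed one 2D Gaussian (surfel) per dense MVS point instead of relying on
    the extremely sparse SfM point cloud. Initial values are placeholders."""
    n = mvs_points.shape[0]
    rot = torch.zeros(n, 4)
    rot[:, 0] = 1.0  # identity quaternion
    return {
        "xyz":     mvs_points.detach().clone().requires_grad_(True),
        "scale":   torch.full((n, 2), 1e-2, requires_grad=True),  # two scales per surfel
        "rot":     rot.requires_grad_(True),
        "opacity": torch.full((n, 1), 0.5, requires_grad=True),
    }

def geometry_prioritized_loss(render, gaussians, view, w_geo=1.0, w_photo=0.2):
    """`render(gaussians, view) -> (rgb, depth, normal)` is a caller-supplied
    differentiable surfel rasterizer. MVS depth/normal maps supervise geometry
    directly, and the geometric terms dominate the photometric one."""
    rgb, depth, normal = render(gaussians, view)
    l_photo = (rgb - view["rgb"]).abs().mean()
    l_depth = (depth - view["mvs_depth"]).abs().mean()
    l_norm  = (1.0 - (normal * view["mvs_normal"]).sum(dim=-1)).mean()
    return w_geo * (l_depth + l_norm) + w_photo * l_photo
```

In a training loop, densification and pruning would proceed as in standard 2D Gaussian Splatting; under this reading of the abstract, only the initialization source and the loss weighting change.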
Related papers
- FreeSplat++: Generalizable 3D Gaussian Splatting for Efficient Indoor Scene Reconstruction [50.534213038479926]
FreeSplat++ is an alternative approach to large-scale indoor whole-scene reconstruction. Our method with depth-regularized per-scene fine-tuning demonstrates substantial improvements in reconstruction accuracy and a notable reduction in training time.
arXiv Detail & Related papers (2025-03-29T06:22:08Z) - RoGSplat: Learning Robust Generalizable Human Gaussian Splatting from Sparse Multi-View Images [39.03889696169877]
RoGSplat is a novel approach for synthesizing high-fidelity novel views of unseen humans from sparse multi-view images. Our method outperforms state-of-the-art methods in novel view synthesis and cross-dataset generalization.
arXiv Detail & Related papers (2025-03-18T12:18:34Z) - SolidGS: Consolidating Gaussian Surfel Splatting for Sparse-View Surface Reconstruction [48.228533595941556]
We observe that the reconstructed geometry can be severely inconsistent across multiple views.
We propose a novel method called SolidGS to address this problem.
With the additional help of geometrical regularization and monocular normal estimation, our method achieves superior performance on the sparse view surface reconstruction.
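The entry names geometric regularization and monocular normal estimation without detailing them; one plausible form of such a sparse-view objective (an assumption for illustration, not necessarily the paper's exact formulation) is:

```latex
\[
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{photo}}
\;+\; \lambda_{n}\sum_{p}\bigl(1-\mathbf{n}_p^{\mathrm{render}}\cdot\mathbf{n}_p^{\mathrm{mono}}\bigr)
\;+\; \lambda_{g}\,\mathcal{L}_{\mathrm{geo}}
\]
```

where $\mathbf{n}_p^{\mathrm{mono}}$ is a monocular normal estimate at pixel $p$, $\mathcal{L}_{\mathrm{geo}}$ stands for a multi-view geometric consistency regularizer, and $\lambda_n$, $\lambda_g$ are assumed weights.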
arXiv Detail & Related papers (2024-12-19T21:04:43Z) - AGS-Mesh: Adaptive Gaussian Splatting and Meshing with Geometric Priors for Indoor Room Reconstruction Using Smartphones [19.429461194706786]
We propose an approach for joint surface depth and normal refinement of Gaussian Splatting methods for accurate 3D reconstruction of indoor scenes. Our filtering strategy and optimization design demonstrate significant improvements in both mesh estimation and novel-view synthesis.
arXiv Detail & Related papers (2024-11-28T17:04:32Z) - Quadratic Gaussian Splatting for Efficient and Detailed Surface Reconstruction [7.500927135156425]
Quadratic Gaussian Splatting (QGS) is a novel method that replaces disks with quadric surfaces.
QGS renders spatial curvature to guide the normal consistency term, effectively reducing over-smoothing.
Our code will be released as open source.
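To illustrate the idea in the entry above, a quadric surfel can be written as a local height field over its tangent plane, from which curvature follows directly. This is an illustrative parameterization under standard differential-geometry conventions, not necessarily QGS's exact one:

```latex
\[
z(\mathbf{u}) \;=\; \tfrac{1}{2}\,\mathbf{u}^{\top}\mathbf{S}\,\mathbf{u},
\qquad \mathbf{u}=(u,v)^{\top},\quad \mathbf{S}=\mathbf{S}^{\top}\in\mathbb{R}^{2\times 2},
\qquad H=\tfrac{1}{2}\operatorname{tr}(\mathbf{S}),\quad K=\det(\mathbf{S})
\]
```

A curvature-aware normal term could then down-weight high-curvature pixels, e.g. $\mathcal{L}_{n}=\sum_{p} w(\kappa_p)\bigl(1-\mathbf{n}^{\mathrm{render}}_p\cdot\mathbf{n}^{\mathrm{depth}}_p\bigr)$ with $w$ decreasing in $\lvert\kappa_p\rvert$, so sharp structures are not over-smoothed; the paper's actual weighting may differ.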
arXiv Detail & Related papers (2024-11-25T13:55:00Z) - MGS-SLAM: Monocular Sparse Tracking and Gaussian Mapping with Depth Smooth Regularization [29.713650915551632]
This letter introduces a novel framework for dense Visual Simultaneous Localization and Mapping based on Gaussian Splatting.
We jointly optimize sparse visual odometry tracking and 3D Gaussian Splatting scene representation for the first time.
Our pose estimation accuracy surpasses existing state-of-the-art methods.
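The depth-smooth regularization named in the title is not detailed in the summary; a standard edge-aware depth-smoothness term of the kind commonly used in this setting (the paper's exact form may differ) is:

```latex
\[
\mathcal{L}_{\mathrm{smooth}}
\;=\; \sum_{p}\,\lvert\partial_x d_p\rvert\, e^{-\lvert\partial_x I_p\rvert}
\;+\; \lvert\partial_y d_p\rvert\, e^{-\lvert\partial_y I_p\rvert}
\]
```

where $d$ is the rendered or predicted depth and $I$ the input image, so depth gradients are penalized everywhere except across image edges.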
arXiv Detail & Related papers (2024-05-10T04:42:21Z) - GS2Mesh: Surface Reconstruction from Gaussian Splatting via Novel Stereo Views [9.175560202201819]
3D Gaussian Splatting (3DGS) has emerged as an efficient approach for accurately representing scenes.
We propose a novel approach for bridging the gap between the noisy 3DGS representation and the smooth 3D mesh representation.
We render stereo-aligned pairs of images corresponding to the original training poses, feed the pairs into a stereo model to get a depth profile, and finally fuse all of the profiles together to get a single mesh.
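The pipeline above maps naturally onto a short script: render a stereo-aligned pair per training pose, run a stereo network, and TSDF-fuse the resulting depth maps. The sketch below uses Open3D for fusion; `render_view` and `stereo_model` are hypothetical caller-supplied stand-ins, and the baseline and voxel parameters are illustrative, not the paper's settings.

```python
import numpy as np
import open3d as o3d

def fuse_mesh(render_view, stereo_model, train_poses, intrinsic, baseline=0.07):
    """Render stereo pairs from trained splats, estimate depth, TSDF-fuse to a mesh.
    render_view(pose_c2w) -> uint8 HxWx3 image; stereo_model(left, right) -> HxW disparity."""
    fx, _ = intrinsic.get_focal_length()
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.01, sdf_trunc=0.04,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for pose in train_poses:                           # pose: 4x4 camera-to-world
        right_pose = pose.copy()
        right_pose[:3, 3] += pose[:3, 0] * baseline    # shift camera along its x-axis
        left, right = render_view(pose), render_view(right_pose)
        disparity = stereo_model(left, right)
        depth = fx * baseline / np.maximum(disparity, 1e-6)
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(left),
            o3d.geometry.Image(depth.astype(np.float32)),
            depth_scale=1.0, convert_rgb_to_intensity=False)
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))  # extrinsic = world-to-camera
    return volume.extract_triangle_mesh()
```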
arXiv Detail & Related papers (2024-04-02T10:13:18Z) - InstantSplat: Sparse-view Gaussian Splatting in Seconds [91.77050739918037]
We introduce InstantSplat, a novel approach for addressing sparse-view 3D scene reconstruction at lightning-fast speed.
InstantSplat employs a self-supervised framework that jointly optimizes the 3D scene representation and camera poses.
It achieves an acceleration of over 30x in reconstruction and improves visual quality (SSIM) from 0.3755 to 0.7624 compared to traditional SfM with 3D-GS.
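A minimal sketch of the joint optimization idea described above, assuming a caller-supplied differentiable rasterizer and an se(3)-style pose parameterization; none of these names or defaults come from the paper:

```python
import torch

def joint_refine(render, gaussians, pose_params, images, iters=200, lr=1e-3):
    """Jointly refine Gaussian attributes and per-view pose parameters with a
    photometric loss. render(gaussians, pose) -> HxWx3; pose_params: (N, 6)."""
    pose_params = pose_params.detach().clone().requires_grad_(True)
    params = [pose_params] + [p for p in gaussians.values() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = sum((render(gaussians, pose_params[i]) - img).abs().mean()
                   for i, img in enumerate(images))
        loss.backward()
        opt.step()
    return gaussians, pose_params.detach()
```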
arXiv Detail & Related papers (2024-03-29T17:29:58Z) - GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS on the evaluated dataset, exhibiting an improvement of 1.15 dB in terms of PSNR.
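A rough sketch of a propagation-guided densification check in the spirit of the entry above: where geometry propagated from neighboring pixels disagrees with the rendered geometry, new Gaussians are seeded. GaussianPro's actual patch-match-style propagation is more involved; all names and thresholds here are illustrative assumptions.

```python
import torch

def propagate_and_densify(rendered_depth, rendered_normal,
                          propagated_depth, propagated_normal,
                          pixel_rays, depth_thr=0.05, cos_thr=0.9):
    """Select pixels whose propagated depth/normal disagree with the rendered
    ones and back-project them to seed new Gaussian centers.
    Shapes: depths HxW, normals and pixel_rays HxWx3 (rays in camera frame)."""
    depth_err = (rendered_depth - propagated_depth).abs() / propagated_depth.clamp(min=1e-6)
    cos_sim = (rendered_normal * propagated_normal).sum(dim=-1)
    grow_mask = (depth_err > depth_thr) | (cos_sim < cos_thr)
    new_xyz = pixel_rays[grow_mask] * propagated_depth[grow_mask].unsqueeze(-1)
    return new_xyz, grow_mask
```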
arXiv Detail & Related papers (2024-02-22T16:00:20Z) - GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis [70.24111297192057]
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in real time.
The proposed method enables 2K-resolution rendering under a sparse-view camera setting.
arXiv Detail & Related papers (2023-12-04T18:59:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.