GVGS: Gaussian Visibility-Aware Multi-View Geometry for Accurate Surface Reconstruction
- URL: http://arxiv.org/abs/2601.20331v1
- Date: Wed, 28 Jan 2026 07:48:51 GMT
- Title: GVGS: Gaussian Visibility-Aware Multi-View Geometry for Accurate Surface Reconstruction
- Authors: Mai Su, Qihan Yu, Zhongtao Wang, Yilong Li, Chengwei Pan, Yisong Chen, Guoping Wang
- Abstract summary: 3D Gaussian Splatting enables efficient optimization and high-quality rendering, yet accurate surface reconstruction remains challenging. We introduce a visibility-aware multi-view geometric consistency constraint that aggregates the visibility of shared Gaussian primitives across views. We also propose a progressive quadtree-calibrated monocular depth constraint that performs block-wise affine calibration from coarse to fine spatial scales.
- Score: 15.170414649311441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting enables efficient optimization and high-quality rendering, yet accurate surface reconstruction remains challenging. Prior methods improve surface reconstruction by refining Gaussian depth estimates, either via multi-view geometric consistency or through monocular depth priors. However, multi-view constraints become unreliable under large geometric discrepancies, while monocular priors suffer from scale ambiguity and local inconsistency, ultimately leading to inaccurate Gaussian depth supervision. To address these limitations, we introduce a Gaussian visibility-aware multi-view geometric consistency constraint that aggregates the visibility of shared Gaussian primitives across views, enabling more accurate and stable geometric supervision. In addition, we propose a progressive quadtree-calibrated monocular depth constraint that performs block-wise affine calibration from coarse to fine spatial scales, mitigating the scale ambiguity of depth priors while preserving fine-grained surface details. Extensive experiments on the DTU and TNT datasets demonstrate consistent improvements in geometric accuracy over prior Gaussian-based and implicit surface reconstruction methods. Code is available at an anonymous repository: https://github.com/GVGScode/GVGS.
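The progressive quadtree-calibrated monocular depth constraint can be pictured as follows: at each quadtree level, a per-block scale and shift are fitted to align the monocular depth prior with a reference (e.g. rendered) depth, moving from coarse blocks to fine ones. Below is a minimal NumPy sketch of that block-wise affine calibration idea only; it is not the authors' implementation, and the function names, the fixed power-of-two block layout, and the plain least-squares fit are all assumptions made for illustration.

```python
import numpy as np

def affine_fit(mono, ref, mask):
    """Least-squares scale/shift aligning mono depth to reference depth."""
    m, r = mono[mask], ref[mask]
    A = np.stack([m, np.ones_like(m)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, r, rcond=None)
    return s, t

def quadtree_calibrate(mono, ref, mask, levels=3):
    """Coarse-to-fine block-wise affine calibration of a monocular depth map.

    Level 0 fits one global affine transform; each subsequent level splits
    the image into 2^l x 2^l blocks and refits scale/shift per block.
    """
    out = mono.copy()
    H, W = mono.shape
    for lvl in range(levels):
        n = 2 ** lvl                      # 1x1, 2x2, 4x4, ... block grids
        bh, bw = H // n, W // n
        for i in range(n):
            for j in range(n):
                sl = (slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw))
                if mask[sl].sum() < 2:
                    continue              # too few valid pixels to fit
                s, t = affine_fit(out[sl], ref[sl], mask[sl])
                out[sl] = s * out[sl] + t
    return out
```

In a pipeline like the one described, the calibrated map would then serve as the depth supervision signal, with coarse levels resolving global scale ambiguity and fine levels correcting local inconsistencies.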
Related papers
- MDE-VIO: Enhancing Visual-Inertial Odometry Using Learned Depth Priors [8.2208199207543]
We propose a novel framework that enforces affine-invariant depth consistency and pairwise ordinal constraints. This approach strictly adheres to the computational limits of edge devices while robustly recovering metric scale.
arXiv Detail & Related papers (2026-02-11T19:53:06Z) - SparseSurf: Sparse-View 3D Gaussian Splatting for Surface Reconstruction [26.59203606048875]
We propose SparseSurf, a method that reconstructs more accurate and detailed surfaces while preserving high-quality novel view rendering. Our key insight is to introduce Stereo Geometry-Texture Alignment, which bridges rendering quality and geometry estimation. In addition, we present a Pseudo-Feature Enhanced Geometry Consistency that enforces multi-view geometric consistency.
arXiv Detail & Related papers (2025-11-18T16:24:37Z) - VA-GS: Enhancing the Geometric Representation of Gaussian Splatting via View Alignment [48.147381011235446]
3D Gaussian Splatting has recently emerged as an efficient solution for real-time novel view synthesis. We propose a novel method that enhances the geometric representation of 3D Gaussians through view alignment. Our method achieves state-of-the-art performance in both surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2025-10-13T14:44:50Z) - D$^2$GS: Depth-and-Density Guided Gaussian Splatting for Stable and Accurate Sparse-View Reconstruction [73.61056394880733]
3D Gaussian Splatting (3DGS) enables real-time, high-fidelity novel view synthesis (NVS) with explicit 3D representations. We identify two key failure modes under sparse-view conditions: overfitting in regions with excessive Gaussian density near the camera, and underfitting in distant areas with insufficient Gaussian coverage. We propose a unified framework, D$^2$GS, comprising two key components: a Depth-and-Density Guided Dropout strategy, and a Distance-Aware Fidelity Enhancement module.
arXiv Detail & Related papers (2025-10-09T17:59:49Z) - DET-GS: Depth- and Edge-Aware Regularization for High-Fidelity 3D Gaussian Splatting [5.759434800012218]
3D Gaussian Splatting (3DGS) represents a significant advancement in the field of efficient and high-fidelity novel view synthesis. Existing methods often rely on non-local depth regularization, which fails to capture fine-grained structures. We propose DET-GS, a unified depth- and edge-aware regularization framework for 3D Gaussian Splatting.
arXiv Detail & Related papers (2025-08-06T05:37:26Z) - Multiview Geometric Regularization of Gaussian Splatting for Accurate Radiance Fields [8.41704235466682]
We propose an effective multiview geometric regularization strategy that integrates multiview stereo (MVS) depth, RGB, and normal constraints into Gaussian Splatting. Our key insight is the complementary relationship between MVS-derived depth points and Gaussian Splatting-optimized positions. To leverage this insight, we introduce a median depth-based multiview relative depth loss with uncertainty estimation.
arXiv Detail & Related papers (2025-06-16T14:02:46Z) - RDG-GS: Relative Depth Guidance with Gaussian Splatting for Real-time Sparse-View 3D Rendering [13.684624443214599]
We present RDG-GS, a novel sparse-view 3D rendering framework with Relative Depth Guidance based on 3D Gaussian Splatting. The core innovation lies in utilizing relative depth guidance to refine the Gaussian field, steering it towards view-consistent spatial geometric representations. Across extensive experiments on Mip-NeRF360, LLFF, DTU, and Blender, RDG-GS demonstrates state-of-the-art rendering quality and efficiency.
arXiv Detail & Related papers (2025-01-19T16:22:28Z) - SolidGS: Consolidating Gaussian Surfel Splatting for Sparse-View Surface Reconstruction [48.228533595941556]
We observe that reconstructed geometry can be severely inconsistent across multiple views, and propose a novel method called SolidGS to address this problem. With the additional help of geometric regularization and monocular normal estimation, our method achieves superior performance on sparse-view surface reconstruction.
arXiv Detail & Related papers (2024-12-19T21:04:43Z) - MonoGSDF: Exploring Monocular Geometric Cues for Gaussian Splatting-Guided Implicit Surface Reconstruction [86.87464903285208]
We introduce MonoGSDF, a novel method that couples primitives with a neural Signed Distance Field (SDF) for high-quality reconstruction. To handle arbitrary-scale scenes, we propose a scaling strategy for robust generalization. In experiments on real-world datasets, MonoGSDF outperforms prior methods while maintaining efficiency.
arXiv Detail & Related papers (2024-11-25T20:07:07Z) - VCR-GauS: View Consistent Depth-Normal Regularizer for Gaussian Surface Reconstruction [47.603017811399624]
We propose a Depth-Normal regularizer that directly couples normal with other geometric parameters, leading to full updates of the geometric parameters from normal regularization.
We also introduce a densification and splitting strategy to regularize the size and distribution of 3D Gaussians for more accurate surface modeling.
arXiv Detail & Related papers (2024-06-09T13:15:43Z) - Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.