MeshSplat: Generalizable Sparse-View Surface Reconstruction via Gaussian Splatting
- URL: http://arxiv.org/abs/2508.17811v1
- Date: Mon, 25 Aug 2025 09:04:20 GMT
- Title: MeshSplat: Generalizable Sparse-View Surface Reconstruction via Gaussian Splatting
- Authors: Hanzhi Chang, Ruijie Zhu, Wenjie Chang, Mulin Yu, Yanzhe Liang, Jiahao Lu, Zhuoyuan Li, Tianzhu Zhang,
- Abstract summary: We propose MeshSplat, a generalizable sparse-view surface reconstruction framework via Gaussian Splatting. Our key idea is to leverage 2DGS as a bridge, which connects novel view synthesis to learned geometric priors. We incorporate a feed-forward network to predict per-view pixel-aligned 2DGS, which enables the network to synthesize novel view images.
- Score: 37.35249331090283
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Surface reconstruction has been widely studied in computer vision and graphics. However, existing surface reconstruction works struggle to recover accurate scene geometry when the input views are extremely sparse. To address this issue, we propose MeshSplat, a generalizable sparse-view surface reconstruction framework via Gaussian Splatting. Our key idea is to leverage 2DGS as a bridge, which connects novel view synthesis to learned geometric priors and then transfers these priors to achieve surface reconstruction. Specifically, we incorporate a feed-forward network to predict per-view pixel-aligned 2DGS, which enables the network to synthesize novel view images and thus eliminates the need for direct 3D ground-truth supervision. To improve the accuracy of 2DGS position and orientation prediction, we propose a Weighted Chamfer Distance Loss to regularize the depth maps, especially in overlapping areas of input views, and also a normal prediction network to align the orientation of 2DGS with normal vectors predicted by a monocular normal estimator. Extensive experiments validate the effectiveness of our proposed improvement, demonstrating that our method achieves state-of-the-art performance in generalizable sparse-view mesh reconstruction tasks. Project Page: https://hanzhichang.github.io/meshsplat_web
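The abstract mentions a Weighted Chamfer Distance Loss for regularizing depth maps in overlapping view regions but does not give its exact form. As a rough illustration only (a minimal NumPy sketch under assumed conventions, not the paper's implementation), a weighted symmetric Chamfer distance between two point sets can be written as follows; the per-point weights, which in the paper's setting would emphasize points in the overlap of the input views, are left to the caller:

```python
import numpy as np

def weighted_chamfer_distance(p, q, w_p=None, w_q=None):
    """Weighted symmetric Chamfer distance between point sets
    p (N, 3) and q (M, 3). Weights w_p (N,) and w_q (M,) are
    hypothetical per-point importance terms (e.g. higher in
    view-overlap regions); uniform if omitted."""
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    w_p = np.ones(len(p)) if w_p is None else np.asarray(w_p, dtype=float)
    w_q = np.ones(len(q)) if w_q is None else np.asarray(w_q, dtype=float)
    # Weighted mean nearest-neighbour distance in each direction.
    term_p = np.sum(w_p * d.min(axis=1)) / w_p.sum()
    term_q = np.sum(w_q * d.min(axis=0)) / w_q.sum()
    return term_p + term_q
```

In the setting described above, `p` and `q` would be depth maps of two input views unprojected into a shared world frame, so that minimizing the loss pulls the per-view 2DGS positions toward a mutually consistent surface.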
Related papers
- G3Splat: Geometrically Consistent Generalizable Gaussian Splatting [30.752029360892504]
We introduce G3Splat, which enforces geometric priors to obtain geometrically consistent 3D scene representations. Trained on RE10K, our approach achieves state-of-the-art performance in (i) geometrically consistent reconstruction, (ii) relative pose estimation, and (iii) novel-view synthesis.
arXiv Detail & Related papers (2025-12-19T13:11:55Z) - G4Splat: Geometry-Guided Gaussian Splatting with Generative Prior [53.762256749551284]
We identify accurate geometry as the fundamental prerequisite for effectively exploiting generative models to enhance 3D scene reconstruction. We incorporate this geometry guidance throughout the generative pipeline to improve visibility mask estimation, guide novel view selection, and enhance multi-view consistency when inpainting with video diffusion models. Our method naturally supports single-view inputs and unposed videos, with strong generalizability in both indoor and outdoor scenarios.
arXiv Detail & Related papers (2025-10-14T03:06:28Z) - Multi-view Normal and Distance Guidance Gaussian Splatting for Surface Reconstruction [2.760653393100493]
3D Gaussian Splatting (3DGS) achieves remarkable results in the field of surface reconstruction. However, when Gaussian normal vectors are aligned only within the single-view projection plane, the geometry may appear reasonable in the current view while biases emerge upon switching to nearby views. We develop a multi-view normal enhancement module, which ensures consistency across views by matching the normals of pixel points in nearby views and computing a loss over them.
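For intuition, a per-pixel normal alignment term of the kind these listings describe (aligning predicted Gaussian normals with reference normals, whether matched from a nearby view or produced by a monocular estimator) is commonly written as one minus the cosine similarity. The sketch below is illustrative only; all shapes and conventions are assumptions, not any paper's actual loss:

```python
import numpy as np

def normal_alignment_loss(pred_normals, ref_normals, eps=1e-8):
    """Mean (1 - cosine similarity) between two normal maps of
    shape (H, W, 3). pred_normals: normals rendered from the
    Gaussians; ref_normals: reference normals (hypothetically from
    a matched nearby view or a monocular normal estimator)."""
    # Normalize both maps to unit length per pixel.
    pred = pred_normals / (np.linalg.norm(pred_normals, axis=-1, keepdims=True) + eps)
    ref = ref_normals / (np.linalg.norm(ref_normals, axis=-1, keepdims=True) + eps)
    # Per-pixel cosine similarity, then average the misalignment.
    cos = np.sum(pred * ref, axis=-1)
    return float(np.mean(1.0 - cos))
```

The loss is 0 when the two maps agree everywhere and approaches 2 when they are everywhere opposed, so gradient descent rotates the predicted normals toward the reference field.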
arXiv Detail & Related papers (2025-08-11T07:25:13Z) - SparSplat: Fast Multi-View Reconstruction with Generalizable 2D Gaussian Splatting [7.9061560322289335]
We propose an MVS-based learning framework that regresses 2DGS surface parameters in a feed-forward fashion to perform 3D shape reconstruction and NVS from sparse-view images. The resulting pipeline attains state-of-the-art results on the DTU 3D reconstruction benchmark in terms of Chamfer distance to ground truth, as well as state-of-the-art NVS.
arXiv Detail & Related papers (2025-05-04T16:33:47Z) - Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views [45.125032766506536]
We propose Sparse2DGS, an MVS-initialized Gaussian Splatting pipeline for complete and accurate reconstruction. Our key insight is to incorporate geometry-prioritized enhancement schemes, allowing for direct and robust geometric learning under ill-posed conditions. Sparse2DGS outperforms existing methods by notable margins while being $2\times$ faster than the NeRF-based fine-tuning approach.
arXiv Detail & Related papers (2025-04-29T02:47:02Z) - GausSurf: Geometry-Guided 3D Gaussian Splatting for Surface Reconstruction [79.42244344704154]
GausSurf employs geometry guidance from multi-view consistency in texture-rich areas and normal priors in texture-less areas of a scene. Our method surpasses state-of-the-art methods in terms of reconstruction quality and computation time.
arXiv Detail & Related papers (2024-11-29T03:54:54Z) - AGS-Mesh: Adaptive Gaussian Splatting and Meshing with Geometric Priors for Indoor Room Reconstruction Using Smartphones [19.429461194706786]
We propose an approach for joint surface depth and normal refinement of Gaussian Splatting methods for accurate 3D reconstruction of indoor scenes. Our filtering strategy and optimization design demonstrate significant improvements in both mesh estimation and novel-view synthesis.
arXiv Detail & Related papers (2024-11-28T17:04:32Z) - CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes [53.107474952492396]
CityGaussianV2 is a novel approach for large-scale scene reconstruction. We implement a decomposed-gradient-based densification and depth regression technique to eliminate blurry artifacts and accelerate convergence. Our method strikes a promising balance between visual quality, geometric accuracy, and storage and training costs.
arXiv Detail & Related papers (2024-11-01T17:59:31Z) - GigaGS: Scaling up Planar-Based 3D Gaussians for Large Scene Surface Reconstruction [71.08607897266045]
3D Gaussian Splatting (3DGS) has shown promising performance in novel view synthesis.
We make the first attempt to tackle the challenging task of large-scale scene surface reconstruction.
We propose GigaGS, the first work for high-quality surface reconstruction for large-scale scenes using 3DGS.
arXiv Detail & Related papers (2024-09-10T17:51:39Z) - Deep Active Surface Models [60.027353171412216]
Active Surface Models have a long history of being useful for modeling complex 3D surfaces, but only Active Contours have been used in conjunction with deep networks.
We introduce layers implementing them that can be integrated seamlessly into Graph Convolutional Networks to enforce sophisticated smoothness priors.
arXiv Detail & Related papers (2020-11-17T18:48:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.