Quadratic Gaussian Splatting for Efficient and Detailed Surface Reconstruction
- URL: http://arxiv.org/abs/2411.16392v1
- Date: Mon, 25 Nov 2024 13:55:00 GMT
- Title: Quadratic Gaussian Splatting for Efficient and Detailed Surface Reconstruction
- Authors: Ziyu Zhang, Binbin Huang, Hanqing Jiang, Liyang Zhou, Xiaojun Xiang, Shunhan Shen
- Abstract summary: Quadratic Gaussian Splatting (QGS) is a novel method that replaces disks with quadric surfaces.
QGS renders spatial curvature to guide the normal consistency term, effectively reducing over-smoothing.
Our code will be released as open source.
- Score: 7.500927135156425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, 3D Gaussian Splatting (3DGS) has attracted attention for its superior rendering quality and speed over Neural Radiance Fields (NeRF). To address 3DGS's limitations in surface representation, 2D Gaussian Splatting (2DGS) introduced disks as scene primitives to model and reconstruct geometries from multi-view images, offering view-consistent geometry. However, the disk's first-order linear approximation often leads to over-smoothed results. We propose Quadratic Gaussian Splatting (QGS), a novel method that replaces disks with quadric surfaces, enhancing geometric fitting, whose code will be open-sourced. QGS defines Gaussian distributions in non-Euclidean space, allowing primitives to capture more complex textures. As a second-order surface approximation, QGS also renders spatial curvature to guide the normal consistency term, effectively reducing over-smoothing. Moreover, QGS is a generalized version of 2DGS that achieves more accurate and detailed reconstructions, as verified by experiments on DTU and TNT, demonstrating its effectiveness in surpassing current state-of-the-art methods in geometry reconstruction. Our code will be released as open source.
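To make the abstract's central construction concrete, the following is a minimal sketch (not the authors' released code) of a quadric splat primitive: a second-order height field in the primitive's local frame, its analytic normal, and a Gaussian weight taken over per-axis arc lengths as a simple stand-in for the non-Euclidean distance the paper describes. The symbol names (k_u, k_v, sigma) and the per-axis distance simplification are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a quadric splat primitive (assumed names, not the QGS code).
# The flat disk of 2DGS is replaced by a quadric height field
# z = 0.5 * (k_u * u^2 + k_v * v^2) in the primitive's local frame, and the
# Gaussian falloff is measured along the curved surface instead of the plane.

def quadric_height(u, v, k_u, k_v):
    """Second-order (quadric) height field in the local tangent frame."""
    return 0.5 * (k_u * u**2 + k_v * v**2)

def quadric_normal(u, v, k_u, k_v):
    """Unit normal of the quadric at (u, v): proportional to (-dz/du, -dz/dv, 1)."""
    n = np.array([-k_u * u, -k_v * v, 1.0])
    return n / np.linalg.norm(n)

def arc_length_1d(x, k):
    """Arc length of the parabola t -> 0.5*k*t^2 from 0 to x (curved distance)."""
    if abs(k) < 1e-12:  # flat case degenerates to the 2DGS disk
        return abs(x)
    s = k * x
    return 0.5 * abs(x * np.sqrt(1.0 + s**2) + np.arcsinh(s) / k)

def quadric_gaussian_weight(u, v, k_u, k_v, sigma=1.0):
    """Gaussian falloff over per-axis curved (arc-length) distances,
    an illustrative stand-in for the paper's non-Euclidean distance."""
    du = arc_length_1d(u, k_u)
    dv = arc_length_1d(v, k_v)
    return np.exp(-0.5 * (du**2 + dv**2) / sigma**2)

# Example: evaluate a curved primitive at the local offset (u, v) = (0.8, 0.3).
print(quadric_gaussian_weight(0.8, 0.3, k_u=2.0, k_v=0.5, sigma=1.0))
print(quadric_normal(0.8, 0.3, k_u=2.0, k_v=0.5))
```

With k_u = k_v = 0 the height field, normal, and weight all reduce to those of a flat disk, mirroring the abstract's claim that QGS is a generalization of 2DGS.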
Related papers
- MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction [28.452920446301608]
We present MILo, a novel framework that bridges the gap between volumetric and surface representations by differentiably extracting a mesh from the 3D Gaussians. Our approach can reconstruct complete scenes, including backgrounds, with state-of-the-art quality while requiring an order of magnitude fewer mesh vertices than previous methods.
arXiv Detail & Related papers (2025-06-30T17:48:54Z) - Sparse2DGS: Geometry-Prioritized Gaussian Splatting for Surface Reconstruction from Sparse Views [45.125032766506536]
We propose Sparse2DGS, an MVS-initialized Gaussian Splatting pipeline for complete and accurate reconstruction.
Our key insight is to incorporate the geometric-prioritized enhancement schemes, allowing for direct and robust geometric learning under ill-posed conditions.
Sparse2DGS outperforms existing methods by notable margins while being $2\times$ faster than the NeRF-based fine-tuning approach.
arXiv Detail & Related papers (2025-04-29T02:47:02Z) - Thin-Shell-SfT: Fine-Grained Monocular Non-rigid 3D Surface Tracking with Neural Deformation Fields [66.1612475655465]
3D reconstruction of deformable surfaces from RGB videos is a challenging problem. Existing methods use deformation models with statistical, neural, or physical priors. We propose Thin-Shell-SfT, a new method for non-rigid 3D tracking of meshes.
arXiv Detail & Related papers (2025-03-25T18:00:46Z) - StableGS: A Floater-Free Framework for 3D Gaussian Splatting [9.935869165752283]
3D Gaussian Splatting (3DGS) reconstructions are plagued by stubborn "floater" artifacts that degrade their geometric and visual fidelity. We propose StableGS, a novel framework that decouples geometric regularization from final appearance rendering. Experiments on multiple benchmarks show StableGS not only eliminates floaters but also resolves the common blur-artifact trade-off.
arXiv Detail & Related papers (2025-03-24T09:02:51Z) - FeatureGS: Eigenvalue-Feature Optimization in 3D Gaussian Splatting for Geometrically Accurate and Artifact-Reduced Reconstruction [1.474723404975345]
3D Gaussian Splatting (3DGS) has emerged as a powerful approach for 3D scene reconstruction using 3D Gaussians. We present FeatureGS, which incorporates an additional geometric loss term based on an eigenvalue-derived 3D shape feature into the optimization process of 3DGS.
arXiv Detail & Related papers (2025-01-29T13:40:25Z) - G2SDF: Surface Reconstruction from Explicit Gaussians with Implicit SDFs [84.07233691641193]
We introduce G2SDF, a novel approach that integrates a neural implicit Signed Distance Field into the Gaussian Splatting framework.
G2SDF achieves superior quality than prior works while maintaining the efficiency of 3DGS.
arXiv Detail & Related papers (2024-11-25T20:07:07Z) - DeSiRe-GS: 4D Street Gaussians for Static-Dynamic Decomposition and Surface Reconstruction for Urban Driving Scenes [71.61083731844282]
We present DeSiRe-GS, a self-supervised Gaussian splatting representation. It enables effective static-dynamic decomposition and high-fidelity surface reconstruction in complex driving scenarios.
arXiv Detail & Related papers (2024-11-18T05:49:16Z) - CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes [53.107474952492396]
CityGaussianV2 is a novel approach for large-scale scene reconstruction.
We implement a decomposed-gradient-based densification and depth regression technique to eliminate blurry artifacts and accelerate convergence.
Our method strikes a promising balance among visual quality, geometric accuracy, and storage and training costs.
arXiv Detail & Related papers (2024-11-01T17:59:31Z) - GeoSplatting: Towards Geometry Guided Gaussian Splatting for Physically-based Inverse Rendering [69.67264955234494]
GeoSplatting is a novel hybrid representation that augments 3DGS with explicit geometric guidance and differentiable PBR equations.
Comprehensive evaluations across diverse datasets demonstrate the superiority of GeoSplatting.
arXiv Detail & Related papers (2024-10-31T17:57:07Z) - RaDe-GS: Rasterizing Depth in Gaussian Splatting [32.38730602146176]
Gaussian Splatting (GS) has proven to be highly effective in novel view synthesis, achieving high-quality and real-time rendering.
Our work achieves a Chamfer distance error comparable to Neuralangelo on the DTU dataset and maintains computational efficiency similar to the original 3DGS methods.
arXiv Detail & Related papers (2024-06-03T15:56:58Z) - R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z) - GaussianRoom: Improving 3D Gaussian Splatting with SDF Guidance and Monocular Cues for Indoor Scene Reconstruction [5.112375652774415]
We propose a unified optimization framework that integrates neural signed distance fields (SDFs) with 3DGS for accurate geometry reconstruction and real-time rendering.
Our method achieves state-of-the-art performance in both surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-05-30T03:46:59Z) - Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z) - 2D Gaussian Splatting for Geometrically Accurate Radiance Fields [50.056790168812114]
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high-quality novel view synthesis and fast rendering speed without baking.
We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images.
We demonstrate that our differentiable terms allow for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering.
arXiv Detail & Related papers (2024-03-26T17:21:24Z) - Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z) - HR-NeuS: Recovering High-Frequency Surface Geometry via Neural Implicit Surfaces [6.382138631957651]
We present High-Resolution NeuS, a novel neural implicit surface reconstruction method.
HR-NeuS recovers high-frequency surface geometry while maintaining large-scale reconstruction accuracy.
We demonstrate through experiments on DTU and BlendedMVS datasets that our approach produces 3D geometries that are qualitatively more detailed and quantitatively of similar accuracy compared to previous approaches.
arXiv Detail & Related papers (2023-02-14T02:25:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.