Metropolis-Hastings Sampling for 3D Gaussian Reconstruction
- URL: http://arxiv.org/abs/2506.12945v2
- Date: Fri, 24 Oct 2025 17:23:51 GMT
- Title: Metropolis-Hastings Sampling for 3D Gaussian Reconstruction
- Authors: Hyunjin Kim, Haebeom Jung, Jaesik Park
- Abstract summary: We propose an adaptive sampling framework for 3D Gaussian Splatting (3DGS). Our framework overcomes limitations by reformulating densification and pruning as a probabilistic sampling process. Our approach achieves faster convergence while matching or modestly surpassing the view-synthesis quality of state-of-the-art models.
- Score: 31.840492077537018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an adaptive sampling framework for 3D Gaussian Splatting (3DGS) that leverages comprehensive multi-view photometric error signals within a unified Metropolis-Hastings approach. Vanilla 3DGS heavily relies on heuristic-based density-control mechanisms (e.g., cloning, splitting, and pruning), which can lead to redundant computations or premature removal of beneficial Gaussians. Our framework overcomes these limitations by reformulating densification and pruning as a probabilistic sampling process, dynamically inserting and relocating Gaussians based on aggregated multi-view errors and opacity scores. Guided by Bayesian acceptance tests derived from these error-based importance scores, our method substantially reduces reliance on heuristics, offers greater flexibility, and adaptively infers Gaussian distributions without requiring predefined scene complexity. Experiments on benchmark datasets, including Mip-NeRF360, Tanks and Temples and Deep Blending, show that our approach reduces the number of Gaussians needed, achieving faster convergence while matching or modestly surpassing the view-synthesis quality of state-of-the-art models.
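The abstract's core idea, a Bayesian acceptance test that decides whether to insert or relocate a Gaussian based on aggregated multi-view error and opacity, can be sketched as follows. This is an illustrative assumption of how such a step might look, not the paper's exact formulation: the importance score (error times opacity), the proposal scheme, and the `temperature` parameter are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_density_step(errors, opacities, temperature=1.0):
    """One hypothetical Metropolis-Hastings densification step.

    `errors` holds aggregated multi-view photometric errors per Gaussian and
    `opacities` their opacity scores; the scoring rule below is an illustrative
    assumption, not the paper's exact formulation.
    """
    # Importance score: high error and high opacity suggest a region
    # worth densifying or a Gaussian worth keeping.
    scores = errors * opacities

    # Proposal: pick a high-error candidate location and a low-importance
    # Gaussian that could be relocated onto it.
    candidate = rng.integers(len(scores))
    target = rng.integers(len(scores))

    # Metropolis-Hastings acceptance: compare importance scores, clamp to 1.
    ratio = scores[candidate] / max(scores[target], 1e-12)
    accept_prob = min(1.0, ratio ** (1.0 / temperature))
    accepted = rng.random() < accept_prob
    return candidate, target, accepted

# Example: three Gaussians with differing error/opacity profiles.
errors = np.array([0.9, 0.1, 0.5])
opacities = np.array([1.0, 0.2, 0.8])
candidate, target, accepted = mh_density_step(errors, opacities)
```

Because acceptance is probabilistic rather than threshold-based, a move that looks locally unhelpful can still occasionally be accepted, which is what lets such a sampler avoid the premature pruning the abstract attributes to heuristic density control.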
Related papers
- Multimodal-Prior-Guided Importance Sampling for Hierarchical Gaussian Splatting in Sparse-View Novel View Synthesis [29.048045656420538]
We present multimodal-prior-guided importance sampling as the central mechanism for hierarchical 3D Gaussian Splatting (3DGS) in sparse-view novel view synthesis. Our framework comprises (1) a coarse-to-fine Gaussian representation that encodes global shape with a stable coarse layer and selectively adds fine primitives where the multimodal metric indicates recoverable detail. Experiments on diverse sparse-view benchmarks demonstrate state-of-the-art reconstructions, with up to +0.3 dB PSNR on DTU.
arXiv Detail & Related papers (2026-03-03T11:19:45Z) - GaussianPOP: Principled Simplification Framework for Compact 3D Gaussian Splatting via Error Quantification [5.355982355439107]
GaussianPOP is a principled simplification framework based on analytical Gaussian error quantification. By introducing a highly efficient algorithm, our framework enables practical error calculation in a single forward pass. We show that our method consistently outperforms existing state-of-the-art pruning methods across both application scenarios.
arXiv Detail & Related papers (2026-02-06T16:17:41Z) - UGOD: Uncertainty-Guided Differentiable Opacity and Soft Dropout for Enhanced Sparse-View 3DGS [8.78995910690481]
3D Gaussian Splatting (3DGS) has become a competitive approach for novel view synthesis (NVS). We investigate how adaptive weighting of Gaussians, characterised by learned uncertainties, affects rendering quality. Our method achieves 3.27% PSNR improvements on the MipNeRF 360 dataset.
arXiv Detail & Related papers (2025-08-07T01:42:22Z) - Shortening the Trajectories: Identity-Aware Gaussian Approximation for Efficient 3D Molecular Generation [2.631060597686179]
Gaussian Probabilistic Generative Models (GPGMs) generate data by reversing a process that corrupts samples with Gaussian noise. These models have achieved state-of-the-art performance across diverse domains, but their practical deployment remains constrained by high computational cost. We introduce a theoretically grounded and empirically validated framework that improves generation efficiency without sacrificing training granularity or inference fidelity.
arXiv Detail & Related papers (2025-07-11T21:39:32Z) - Steepest Descent Density Control for Compact 3D Gaussian Splatting [72.54055499344052]
3D Gaussian Splatting (3DGS) has emerged as a powerful approach for real-time, high-resolution novel view synthesis. We propose a theoretical framework that demystifies and improves density control in 3DGS. We introduce SteepGS, incorporating steepest descent density control, a principled strategy that minimizes loss while maintaining a compact point cloud.
arXiv Detail & Related papers (2025-05-08T18:41:38Z) - Micro-splatting: Maximizing Isotropic Constraints for Refined Optimization in 3D Gaussian Splatting [0.3749861135832072]
This work implements an adaptive densification strategy that dynamically refines regions with high image gradients. It yields denser and more detailed Gaussian representations where needed, without sacrificing rendering efficiency.
arXiv Detail & Related papers (2025-04-08T07:15:58Z) - ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes [81.48624894781257]
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required. Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency. We propose ProtoGS, which learns Gaussian prototypes to represent Gaussian primitives, significantly reducing the total Gaussian count without sacrificing visual quality.
arXiv Detail & Related papers (2025-03-21T18:55:14Z) - GP-GS: Gaussian Processes for Enhanced Gaussian Splatting [15.263608848427136]
This paper proposes a novel 3D reconstruction framework, Gaussian Processes enhanced Gaussian Splatting (GP-GS). GP-GS enables adaptive and uncertainty-guided densification of sparse Structure-from-Motion point clouds. Experiments conducted on synthetic and real-world datasets validate the effectiveness and practicality of the proposed framework.
arXiv Detail & Related papers (2025-02-04T12:50:16Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - PixelGaussian: Generalizable 3D Gaussian Reconstruction from Arbitrary Views [116.10577967146762]
PixelGaussian is an efficient framework for learning generalizable 3D Gaussian reconstruction from arbitrary views.
Our method achieves state-of-the-art performance with good generalization to various numbers of views.
arXiv Detail & Related papers (2024-10-24T17:59:58Z) - MCGS: Multiview Consistency Enhancement for Sparse-View 3D Gaussian Radiance Fields [100.90743697473232]
Radiance fields represented by 3D Gaussians excel at synthesizing novel views, offering both high training efficiency and fast rendering. Existing methods often incorporate depth priors from dense estimation networks but overlook the inherent multi-view consistency in input images. We propose a view synthesis framework based on 3D Gaussian Splatting, enabling scene reconstruction from sparse views.
arXiv Detail & Related papers (2024-10-15T08:39:05Z) - CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting [68.94594215660473]
We propose an efficient 3D scene representation, named Compressed Gaussian Splatting (CompGS).
We exploit a small set of anchor primitives for prediction, allowing the majority of primitives to be encapsulated into highly compact residual forms.
Experimental results show that the proposed CompGS significantly outperforms existing methods, achieving superior compactness in 3D scene representation without compromising model accuracy and rendering quality.
arXiv Detail & Related papers (2024-04-15T04:50:39Z) - LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS [55.85673901231235]
We introduce LightGaussian, a method for transforming 3D Gaussians into a more compact format.
Inspired by Network Pruning, LightGaussian identifies Gaussians with minimal global significance on scene reconstruction.
LightGaussian achieves an average 15x compression rate while boosting FPS from 144 to 237 within the 3D-GS framework.
arXiv Detail & Related papers (2023-11-28T21:39:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.