POp-GS: Next Best View in 3D-Gaussian Splatting with P-Optimality
- URL: http://arxiv.org/abs/2503.07819v2
- Date: Tue, 25 Mar 2025 18:08:49 GMT
- Title: POp-GS: Next Best View in 3D-Gaussian Splatting with P-Optimality
- Authors: Joey Wilson, Marcelino Almeida, Sachit Mahajan, Martin Labrie, Maani Ghaffari, Omid Ghasemalizadeh, Min Sun, Cheng-Hao Kuo, Arnab Sen
- Abstract summary: 3D-GS has proven to be a useful world model with high-quality rasterizations, but it does not natively quantify uncertainty or information. We propose to quantify information gain in 3D-GS by reformulating the problem through the lens of optimal experimental design.
- Score: 12.023303901740753
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present a novel algorithm for quantifying uncertainty and information gained within 3D Gaussian Splatting (3D-GS) through P-Optimality. While 3D-GS has proven to be a useful world model with high-quality rasterizations, it does not natively quantify uncertainty or information, posing a challenge for real-world applications such as 3D-GS SLAM. We propose to quantify information gain in 3D-GS by reformulating the problem through the lens of optimal experimental design, a classical framework widely used in the literature. By restructuring information quantification of 3D-GS through optimal experimental design, we arrive at multiple solutions, of which T-Optimality and D-Optimality perform the best quantitatively and qualitatively as measured on two popular datasets. Additionally, we propose a block diagonal covariance approximation which provides a measure of correlation at the expense of greater computational cost.
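The optimality criteria named in the abstract come from classical optimal experimental design, where each candidate view is scored by a scalar functional of its Fisher information matrix. The following is a minimal illustrative sketch, not the paper's implementation: the Jacobians, regularization constant, and function names are assumptions, and only the T-/D-/A-criteria themselves are standard.

```python
import numpy as np

def optimality_scores(fisher: np.ndarray) -> dict:
    """Score a candidate view by classical optimal-design criteria,
    given its (symmetric positive semi-definite) Fisher information matrix."""
    eps = 1e-9  # small regularizer for near-singular information matrices
    F = fisher + eps * np.eye(fisher.shape[0])
    _, logdet = np.linalg.slogdet(F)
    return {
        "T": float(np.trace(F)),                  # T-optimality: trace of information
        "D": float(logdet),                       # D-optimality: log-determinant
        "A": float(-np.trace(np.linalg.inv(F))),  # A-optimality: negated trace of covariance
    }

# Toy next-best-view selection between two candidate views, where each
# Jacobian row is the derivative of one observed pixel w.r.t. 3 parameters.
rng = np.random.default_rng(0)
J1 = rng.normal(size=(8, 3))   # hypothetical view observing 8 pixels
J2 = rng.normal(size=(16, 3))  # hypothetical view observing 16 pixels
F1, F2 = J1.T @ J1, J2.T @ J2
best = max([F1, F2], key=lambda F: optimality_scores(F)["D"])
```

In this framing, D-optimality maximizes the information volume (shrinking the confidence ellipsoid), while T-optimality needs only the trace and is cheapest to evaluate; the abstract reports T- and D-optimality performing best in practice.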
Related papers
- Tail-Aware Post-Training Quantization for 3D Geometry Models [58.79500829118265]
Post-Training Quantization (PTQ) enables efficient inference without retraining. However, PTQ fails to transfer effectively to 3D models due to intricate feature distributions and prohibitive calibration overhead. We propose TAPTQ, a Tail-Aware Post-Training Quantization pipeline for 3D geometric learning.
arXiv Detail & Related papers (2026-02-02T07:21:15Z) - EGGS: Exchangeable 2D/3D Gaussian Splatting for Geometry-Appearance Balanced Novel View Synthesis [7.477152296481101]
Novel view synthesis (NVS) is crucial in computer vision and graphics, with wide applications in AR, VR, and autonomous driving. While 3D Gaussian Splatting (3DGS) enables real-time rendering with high appearance fidelity, it suffers from multi-view inconsistencies, limiting geometric accuracy. We propose Exchangeable Gaussian Splatting (EGGS), a hybrid representation that integrates 2D and 3D Gaussians to balance appearance and geometry.
arXiv Detail & Related papers (2025-12-02T17:01:00Z) - Opt3DGS: Optimizing 3D Gaussian Splatting with Adaptive Exploration and Curvature-Aware Exploitation [10.150288678666001]
3D Gaussian Splatting (3DGS) has emerged as a leading framework for novel view synthesis, yet its core optimization challenges remain underexplored. We identify two key issues in 3DGS optimization: entrapment in suboptimal local optima and insufficient convergence quality. We propose Opt3DGS, a robust framework that enhances 3DGS through a two-stage optimization process.
arXiv Detail & Related papers (2025-11-17T16:37:33Z) - UGOD: Uncertainty-Guided Differentiable Opacity and Soft Dropout for Enhanced Sparse-View 3DGS [8.78995910690481]
3D Gaussian Splatting (3DGS) has become a competitive approach for novel view synthesis (NVS). We investigate how adaptive weighting of Gaussians, characterised by learned uncertainties, affects rendering quality. Our method achieves 3.27% PSNR improvements on the MipNeRF 360 dataset.
arXiv Detail & Related papers (2025-08-07T01:42:22Z) - GaussianVAE: Adaptive Learning Dynamics of 3D Gaussians for High-Fidelity Super-Resolution [7.288410309484523]
We present a novel approach for enhancing the resolution and geometric fidelity of 3D Gaussian Splatting (3DGS) beyond native training resolution. Our work breaks this limitation through a lightweight generative model that predicts and refines additional 3D Gaussians where needed most.
arXiv Detail & Related papers (2025-06-09T16:13:12Z) - Bridging Geometry-Coherent Text-to-3D Generation with Multi-View Diffusion Priors and Gaussian Splatting [51.08718483081347]
We propose a framework that couples multi-view joint distribution priors to ensure geometrically consistent 3D generation. We derive an optimization rule that effectively couples multi-view priors to guide optimization across different viewpoints. We employ a deformable tetrahedral grid, built from 3D-GS and refined through CSD, to produce high-quality meshes.
arXiv Detail & Related papers (2025-05-07T09:12:45Z) - DashGaussian: Optimizing 3D Gaussian Splatting in 200 Seconds [71.37326848614133]
We propose DashGaussian, a scheduling scheme over the optimization complexity of 3DGS.
We show that our method accelerates the optimization of various 3DGS backbones by 45.7% on average.
arXiv Detail & Related papers (2025-03-24T07:17:27Z) - ArticulatedGS: Self-supervised Digital Twin Modeling of Articulated Objects using 3D Gaussian Splatting [29.69981069695724]
We tackle the challenge of concurrent reconstruction at the part level with the RGB appearance and estimation of motion parameters.
We reconstruct the articulated object in 3D Gaussian representations with both appearance and geometry information at the same time.
We introduce ArticulatedGS, a self-supervised, comprehensive framework that autonomously learns to model shapes and appearances at the part level.
arXiv Detail & Related papers (2025-03-11T07:56:12Z) - Does 3D Gaussian Splatting Need Accurate Volumetric Rendering? [8.421214057144569]
3D Gaussian Splatting (3DGS) is an important reference method for learning 3D representations of a captured scene. NeRFs, which preceded 3DGS, are based on a principled ray-marching approach for rendering. We present an in-depth analysis of the various approximations and assumptions used by the original 3DGS solution.
arXiv Detail & Related papers (2025-02-26T17:11:26Z) - CDGS: Confidence-Aware Depth Regularization for 3D Gaussian Splatting [5.8678184183132265]
CDGS is a confidence-aware depth regularization approach developed to enhance 3DGS. We leverage multi-cue confidence maps of monocular depth estimation and sparse Structure-from-Motion depth to adaptively adjust depth supervision. Our method demonstrates improved geometric detail preservation in early training stages and achieves competitive performance in both NVS quality and geometric accuracy.
arXiv Detail & Related papers (2025-02-20T16:12:13Z) - FeatureGS: Eigenvalue-Feature Optimization in 3D Gaussian Splatting for Geometrically Accurate and Artifact-Reduced Reconstruction [1.474723404975345]
3D Gaussian Splatting (3DGS) has emerged as a powerful approach for 3D scene reconstruction using 3D Gaussians.
We present FeatureGS, which incorporates an additional geometric loss term based on an eigenvalue-derived 3D shape feature into the optimization process of 3DGS.
arXiv Detail & Related papers (2025-01-29T13:40:25Z) - ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery [11.706262924395768]
We introduce a novel densification operation, residual split, which adds a downscaled Gaussian as a residual.
Our approach is capable of adaptively retrieving details and complementing missing geometry.
arXiv Detail & Related papers (2024-12-10T13:19:27Z) - GeoSplatting: Towards Geometry Guided Gaussian Splatting for Physically-based Inverse Rendering [69.67264955234494]
GeoSplatting is a novel hybrid representation that augments 3DGS with explicit geometric guidance and differentiable PBR equations.
Comprehensive evaluations across diverse datasets demonstrate the superiority of GeoSplatting.
arXiv Detail & Related papers (2024-10-31T17:57:07Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting [59.277480452459315]
We propose a principled sensitivity pruning score that preserves visual fidelity and foreground details at significantly higher compression ratios. We also propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing its training pipeline.
arXiv Detail & Related papers (2024-06-14T17:53:55Z) - Geometric Transformation Uncertainty for Improving 3D Fetal Brain Pose Prediction from Freehand 2D Ultrasound Videos [0.8579241568505183]
We propose an uncertainty-aware deep learning model for automated 3D plane localization in 2D fetal brain images.
Our proposed method, QAERTS, demonstrates superior pose estimation accuracy compared to the state-of-the-art and most uncertainty-based approaches.
arXiv Detail & Related papers (2024-05-21T22:42:08Z) - Calib3D: Calibrating Model Preferences for Reliable 3D Scene Understanding [55.32861154245772]
Calib3D is a pioneering effort to benchmark and scrutinize the reliability of 3D scene understanding models. We comprehensively evaluate 28 state-of-the-art models across 10 diverse 3D datasets. We introduce DeptS, a novel depth-aware scaling approach aimed at enhancing 3D model calibration.
arXiv Detail & Related papers (2024-03-25T17:59:59Z) - Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [55.71424195454963]
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.
arXiv Detail & Related papers (2024-02-24T17:22:15Z) - GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis [70.24111297192057]
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in a real-time manner.
The proposed method enables 2K-resolution rendering under a sparse-view camera setting.
arXiv Detail & Related papers (2023-12-04T18:59:55Z) - Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.