OUGS: Active View Selection via Object-aware Uncertainty Estimation in 3DGS
- URL: http://arxiv.org/abs/2511.09397v1
- Date: Thu, 13 Nov 2025 01:52:14 GMT
- Title: OUGS: Active View Selection via Object-aware Uncertainty Estimation in 3DGS
- Authors: Haiyi Li, Qi Chen, Denis Kalkofen, Hsiang-Ting Chen
- Abstract summary: OUGS is a principled, physically-grounded uncertainty formulation for 3DGS. Our core innovation is to derive uncertainty directly from the explicit physical parameters of the 3D Gaussian primitives. This foundation allows us to then seamlessly integrate semantic segmentation masks to produce a targeted, object-aware uncertainty score.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in 3D Gaussian Splatting (3DGS) have achieved state-of-the-art results for novel view synthesis. However, efficiently capturing high-fidelity reconstructions of specific objects within complex scenes remains a significant challenge. A key limitation of existing active reconstruction methods is their reliance on scene-level uncertainty metrics, which are often biased by irrelevant background clutter and lead to inefficient view selection for object-centric tasks. We present OUGS, a novel framework that addresses this challenge with a more principled, physically-grounded uncertainty formulation for 3DGS. Our core innovation is to derive uncertainty directly from the explicit physical parameters of the 3D Gaussian primitives (e.g., position, scale, rotation). By propagating the covariance of these parameters through the rendering Jacobian, we establish a highly interpretable uncertainty model. This foundation allows us to then seamlessly integrate semantic segmentation masks to produce a targeted, object-aware uncertainty score that effectively disentangles the object from its environment. This allows for a more effective active view selection strategy that prioritizes views critical to improving object fidelity. Experimental evaluations on public datasets demonstrate that our approach significantly improves the efficiency of the 3DGS reconstruction process and achieves higher quality for targeted objects compared to existing state-of-the-art methods, while also serving as a robust uncertainty estimator for the global scene.
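The abstract's core mechanism, propagating the covariance of each Gaussian's physical parameters through the rendering Jacobian and then masking the result to the target object, can be sketched numerically. This is a minimal first-order illustration assuming a generic Jacobian; the function names, shapes, and the toy data are my assumptions, not the authors' implementation.

```python
import numpy as np

def propagate_uncertainty(J, Sigma_theta):
    """First-order uncertainty propagation: Sigma_pix = J @ Sigma_theta @ J.T.

    J           -- (n_pixels, n_params) Jacobian of rendered pixel values
                   w.r.t. a primitive's parameters (position, scale, rotation).
    Sigma_theta -- (n_params, n_params) covariance of those parameters.
    Returns the per-pixel variance (diagonal of the propagated covariance).
    """
    Sigma_pix = J @ Sigma_theta @ J.T
    return np.diag(Sigma_pix)

def object_aware_score(pixel_var, mask):
    """Restrict the uncertainty to the object via a binary semantic mask."""
    return float(np.sum(pixel_var * mask))

# Toy example: 4 pixels, 3 parameters, small isotropic parameter covariance.
rng = np.random.default_rng(0)
J = rng.standard_normal((4, 3))
Sigma_theta = 0.01 * np.eye(3)
var = propagate_uncertainty(J, Sigma_theta)
# Semantic mask: the object covers only the first two pixels, so background
# uncertainty is excluded from the view-selection score.
score = object_aware_score(var, np.array([1, 1, 0, 0]))
```

In an active view selection loop, a score like this would be computed per candidate view, and the highest-scoring (most uncertain, object-relevant) view captured next.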
Related papers
- IoUCert: Robustness Verification for Anchor-based Object Detectors [58.35703549470485]
We introduce IoUCert, a novel formal verification framework designed specifically to overcome these bottlenecks in anchor-based object detection architectures. We show that our method enables the robustness verification of realistic, anchor-based models including SSD, YOLOv2, and YOLOv3 variants against various input perturbations.
arXiv Detail & Related papers (2026-03-03T14:36:46Z)
- Informative Object-centric Next Best View for Object-aware 3D Gaussian Splatting in Cluttered Scenes [7.75982375074512]
We introduce an instance-aware Next Best View (NBV) policy that prioritizes underexplored regions by leveraging object features. Specifically, our object-aware 3DGS distills instance-level information into one-hot object vectors, which are used to compute confidence-weighted information gain. Experiments demonstrate that our NBV policy reduces depth error by up to 77.14% on the synthetic dataset and 34.10% on the real-world GraspNet dataset.
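The confidence-weighted information gain described in this summary can be sketched as follows. This is a hypothetical illustration, not the paper's code: the function name, the soft one-hot `object_probs` array, and the toy values are all my assumptions.

```python
import numpy as np

def confidence_weighted_gain(info_gain, object_probs, target_id):
    """Sum per-Gaussian information gain, weighted by the probability that
    each Gaussian belongs to the target object."""
    confidence = object_probs[:, target_id]  # (n_gaussians,) weights in [0, 1]
    return float(np.sum(confidence * info_gain))

# Toy example: 3 Gaussians, 2 object classes. Gaussians 0 and 2 belong to
# object 0, so only their gains (0.5 and 0.8) contribute to the view score.
info_gain = np.array([0.5, 0.2, 0.8])
object_probs = np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [1.0, 0.0]])
gain = confidence_weighted_gain(info_gain, object_probs, target_id=0)
```

The effect is the same disentangling idea as in OUGS: background Gaussians receive near-zero confidence for the target object and stop dominating the view-selection signal.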
arXiv Detail & Related papers (2026-02-09T04:50:36Z)
- Perceptual Quality Assessment of 3D Gaussian Splatting: A Subjective Dataset and Prediction Metric [76.66966098297986]
We present 3DGS-QA, the first subjective quality assessment dataset for 3DGS. It comprises 225 degraded reconstructions across 15 object types, enabling a controlled investigation of common distortion factors. Our model extracts spatial and photometric cues from the Gaussian representation to estimate perceived quality in a structure-aware manner.
arXiv Detail & Related papers (2025-11-11T09:34:20Z)
- RobustSplat: Decoupling Densification and Dynamics for Transient-Free 3DGS [79.15416002879239]
3D Gaussian Splatting has gained significant attention for its real-time, photo-realistic rendering in novel-view synthesis and 3D modeling. Existing methods struggle with accurately modeling scenes affected by transient objects, leading to artifacts in the rendered images. We propose RobustSplat, a robust solution based on two critical designs.
arXiv Detail & Related papers (2025-06-03T11:13:48Z)
- Uncertainty-Aware Normal-Guided Gaussian Splatting for Surface Reconstruction from Sparse Image Sequences [21.120659841877508]
3D Gaussian Splatting (3DGS) has achieved impressive rendering performance in novel view synthesis. We propose Uncertainty-aware Normal-Guided Gaussian Splatting (UNG-GS) to quantify geometric uncertainty within the 3DGS pipeline. UNG-GS significantly outperforms state-of-the-art methods in both sparse and dense sequences.
arXiv Detail & Related papers (2025-03-14T08:18:12Z)
- RW-Net: Enhancing Few-Shot Point Cloud Classification with a Wavelet Transform Projection-based Network [6.305913808037513]
This work introduces RW-Net, a novel framework designed to address the challenges above by integrating Rate-Distortion Explanation (RDE) and wavelet transform. By emphasizing low-frequency components of the input data, the wavelet transform captures fundamental geometric and structural attributes of 3D objects. The results demonstrate that our approach achieves state-of-the-art performance and exhibits superior generalization and robustness in few-shot learning scenarios.
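The "low-frequency components" this summary refers to are the approximation coefficients of a wavelet transform. A minimal one-level Haar transform illustrates the idea; this is an assumed sketch of the general technique, not RW-Net's actual projection pipeline.

```python
import numpy as np

def haar_lowpass(x):
    """Return the low-frequency (approximation) coefficients of a one-level
    1-D Haar wavelet transform: scaled pairwise averages of the signal."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                       # pad odd-length input to even length
        x = np.append(x, x[-1])
    return (x[0::2] + x[1::2]) / np.sqrt(2.0)

# Pairwise averaging smooths out high-frequency detail, keeping the coarse
# shape of the signal -- the "fundamental geometric attributes" in the summary.
coarse = haar_lowpass([1.0, 1.0, 4.0, 4.0])
```

Applied to projected point-cloud data, repeating this decomposition yields progressively coarser, noise-robust representations.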
arXiv Detail & Related papers (2025-01-06T18:55:59Z)
- Exploring Test-Time Adaptation for Object Detection in Continually Changing Environments [20.307151769610087]
Continual Test-Time Adaptation (CTTA) has emerged as a promising technique to gradually adapt a source-trained model to continually changing target domains. We present AMROD, featuring three core components, to tackle these challenges for detection models in CTTA scenarios. We demonstrate the effectiveness of AMROD on four CTTA object detection tasks, where AMROD outperforms existing methods.
arXiv Detail & Related papers (2024-06-24T08:30:03Z)
- Hardness-Aware Scene Synthesis for Semi-Supervised 3D Object Detection [59.33188668341604]
3D object detection serves as the fundamental task of autonomous driving perception.
It is costly to obtain high-quality annotations for point cloud data.
We propose a hardness-aware scene synthesis (HASS) method to generate adaptive synthetic scenes.
arXiv Detail & Related papers (2024-05-27T17:59:23Z)
- Instance-aware Multi-Camera 3D Object Detection with Structural Priors Mining and Self-Boosting Learning [93.71280187657831]
Camera-based bird-eye-view (BEV) perception paradigm has made significant progress in the autonomous driving field.
We propose IA-BEV, which integrates image-plane instance awareness into the depth estimation process within a BEV-based detector.
arXiv Detail & Related papers (2023-12-13T09:24:42Z)
- Multi-view 3D Object Reconstruction and Uncertainty Modelling with Neural Shape Prior [9.716201630968433]
3D object reconstruction is important for semantic scene understanding.
It is challenging to reconstruct detailed 3D shapes from monocular images directly due to a lack of depth information, occlusion and noise.
We tackle this problem by leveraging a neural object representation which learns an object shape distribution from a large dataset of 3D object models and maps it into a latent space.
We propose a method to model uncertainty as part of the representation and define an uncertainty-aware encoder which generates latent codes with uncertainty directly from individual input images.
arXiv Detail & Related papers (2023-06-17T03:25:13Z)
- Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets. However, they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.