Active Implicit Object Reconstruction using Uncertainty-guided Next-Best-View Optimization
- URL: http://arxiv.org/abs/2303.16739v4
- Date: Tue, 28 May 2024 07:38:39 GMT
- Title: Active Implicit Object Reconstruction using Uncertainty-guided Next-Best-View Optimization
- Authors: Dongyu Yan, Jianheng Liu, Fengyu Quan, Haoyao Chen, Mengmeng Fu
- Abstract summary: Actively planning sensor views during object reconstruction is crucial for autonomous mobile robots.
We propose a seamless integration of the emerging implicit representation with the active reconstruction task.
Our approach effectively improves reconstruction accuracy and efficiency of view planning in active reconstruction tasks.
- Score: 1.2268315442962412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Actively planning sensor views during object reconstruction is crucial for autonomous mobile robots. An effective method should be able to strike a balance between accuracy and efficiency. In this paper, we propose a seamless integration of the emerging implicit representation with the active reconstruction task. We build an implicit occupancy field as our geometry proxy. During training, the prior object bounding box is utilized as auxiliary information to generate clean and detailed reconstructions. To evaluate view uncertainty, we employ a sampling-based approach that directly extracts entropy from the reconstructed occupancy probability field as our measure of view information gain. This eliminates the need for additional uncertainty maps or learning. Unlike previous methods that compare view uncertainty within a finite set of candidates, we aim to find the next-best-view (NBV) on a continuous manifold. Leveraging the differentiability of the implicit representation, the NBV can be optimized directly by maximizing the view uncertainty using gradient descent. This significantly enhances the method's adaptability to different scenarios. Simulation and real-world experiments demonstrate that our approach effectively improves reconstruction accuracy and the efficiency of view planning in active reconstruction tasks. The proposed system will be open-sourced at https://github.com/HITSZ-NRSL/ActiveImplicitRecon.git.
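To make the described loop concrete, the following PyTorch sketch computes view information gain as the binary entropy of occupancy probabilities sampled along the rays of a candidate view and refines the view by gradient ascent on that gain. The occupancy-network interface, the simplified ray model around a `look_at` point, and all hyperparameters are illustrative assumptions, not the released implementation.

```python
# Hedged sketch of uncertainty-guided NBV optimization: entropy of the occupancy
# field sampled along candidate-view rays is maximized w.r.t. the camera position.
# `occ_net` is assumed to map 3D points to occupancy logits; the ray model is a
# crude stand-in for a full pinhole camera.
import torch
import torch.nn.functional as F

def occupancy_entropy(p, eps=1e-6):
    """Binary entropy H(p) = -p log p - (1 - p) log(1 - p)."""
    p = p.clamp(eps, 1.0 - eps)
    return -(p * p.log() + (1.0 - p) * (1.0 - p).log())

def view_information_gain(occ_net, cam_pos, look_at, n_rays=256, n_samples=64,
                          near=0.2, far=2.0):
    """Sum of occupancy entropy over points sampled along the rays of one view."""
    forward = F.normalize(look_at - cam_pos, dim=-1)
    dirs = F.normalize(forward + 0.15 * torch.randn(n_rays, 3), dim=-1)
    t = torch.linspace(near, far, n_samples).view(1, n_samples, 1)
    pts = cam_pos.view(1, 1, 3) + dirs.view(n_rays, 1, 3) * t   # (n_rays, n_samples, 3)
    p = torch.sigmoid(occ_net(pts.reshape(-1, 3)))              # occupancy probabilities
    return occupancy_entropy(p).sum()

def optimize_nbv(occ_net, init_cam_pos, look_at, steps=50, lr=1e-2):
    """Refine the camera position on the continuous pose space by gradient ascent."""
    cam_pos = init_cam_pos.clone().requires_grad_(True)
    opt = torch.optim.Adam([cam_pos], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-view_information_gain(occ_net, cam_pos, look_at)).backward()  # maximize gain
        opt.step()
    return cam_pos.detach()
```

In the paper the optimization is carried out over the full camera pose with additional constraints; the position-only variant above is kept short for clarity.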
Related papers
- ActiveSplat: High-Fidelity Scene Reconstruction through Active Gaussian Splatting [12.628559736243536]
We propose ActiveSplat, an autonomous high-fidelity reconstruction system leveraging Gaussian splatting.
The system establishes a unified framework for online mapping, viewpoint selection, and path planning.
Experiments and ablation studies validate the efficacy of the proposed method in terms of reconstruction accuracy, data coverage, and exploration efficiency.
arXiv Detail & Related papers (2024-10-29T11:18:04Z)
- STAIR: Semantic-Targeted Active Implicit Reconstruction [23.884933841874908]
Actively reconstructing objects of interest, i.e. objects with specific semantic meanings, is relevant for a robot to perform downstream tasks.
We propose a novel framework for semantic-targeted active reconstruction using posed RGB-D measurements and 2D semantic labels as input.
arXiv Detail & Related papers (2024-03-17T14:42:05Z)
- Model Checking for Closed-Loop Robot Reactive Planning [0.0]
We show how model checking can be used to create multistep plans for a differential-drive wheeled robot so that it can avoid immediate danger.
Using a small, purpose-built model-checking algorithm in situ, we generate plans in real time in a way that reflects the egocentric reactive response of simple biological agents.
arXiv Detail & Related papers (2023-11-16T11:02:29Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
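As a rough illustration of that mechanism, the sketch below encodes a batch of point-to-model residuals, mixes them with a single self-attention layer, and updates a per-point state that is mapped to inlier weights for the next sampling round. The dimensions, layers, and update rule are assumptions made for this example, not the paper's architecture.

```python
# Speculative sketch: one-step attention over per-point residuals updates a
# per-point estimation state and produces inlier weights for guided sampling.
import torch
import torch.nn as nn

class ResidualAttentionUpdate(nn.Module):
    def __init__(self, n_hypotheses=16, d_state=64, n_heads=4):
        super().__init__()
        self.encode = nn.Linear(n_hypotheses, d_state)       # residuals -> features
        self.attn = nn.MultiheadAttention(d_state, n_heads, batch_first=True)
        self.to_weight = nn.Linear(d_state, 1)

    def forward(self, residuals, state):
        # residuals: (B, N, n_hypotheses) point-to-model residuals of the current batch
        # state:     (B, N, d_state) per-point estimation state carried across iterations
        x = state + self.encode(residuals)
        mixed, _ = self.attn(x, x, x)                         # lightweight one-step transformer
        state = state + mixed
        weights = torch.sigmoid(self.to_weight(state)).squeeze(-1)  # per-point inlier weights
        return state, weights
```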
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- CPPF++: Uncertainty-Aware Sim2Real Object Pose Estimation by Vote Aggregation [67.12857074801731]
We introduce a novel method, CPPF++, designed for sim-to-real pose estimation.
To address the challenge posed by vote collision, we propose a novel approach that involves modeling the voting uncertainty.
We incorporate several innovative modules, including noisy pair filtering, online alignment optimization, and a feature ensemble.
arXiv Detail & Related papers (2022-11-24T03:27:00Z)
- Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information-theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z)
- Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis [6.15738282053772]
We introduce EVA, the first explainability method guaranteed to exhaustively explore a perturbation space.
We leverage the beneficial properties of verified perturbation analysis to efficiently characterize the input variables that are most likely to drive the model decision.
arXiv Detail & Related papers (2022-02-15T21:13:55Z)
- Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation [87.54604263202941]
We propose a tiny deep neural network whose partial layers are iteratively exploited to refine its previous estimations.
We employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model.
Our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in terms of both accuracy and efficiency for widely used benchmarks.
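A hedged sketch of that loop is given below: a small refinement block with shared weights is applied repeatedly, and a learned gate decides when to leave the loop. The layer sizes, the gating rule, and the batch-level (rather than per-sample) exit used here for brevity are assumptions, not the paper's network.

```python
# Illustrative weight-sharing refinement loop with a learned early-exit gate.
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    def __init__(self, feat_dim=128, pose_dim=63, max_iters=4, exit_threshold=0.5):
        super().__init__()
        self.refine = nn.Sequential(                 # same weights reused each iteration
            nn.Linear(feat_dim + pose_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, pose_dim),
        )
        self.gate = nn.Sequential(                   # predicts "confident enough to exit"
            nn.Linear(feat_dim + pose_dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )
        self.max_iters = max_iters
        self.exit_threshold = exit_threshold

    def forward(self, feat, pose):
        for _ in range(self.max_iters):
            x = torch.cat([feat, pose], dim=-1)
            pose = pose + self.refine(x)             # residual update of the estimate
            if self.gate(x).mean() > self.exit_threshold:
                break                                # batch-level exit, for brevity
        return pose
```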
arXiv Detail & Related papers (2021-11-11T23:31:34Z)
- An Adaptive Framework for Learning Unsupervised Depth Completion [59.17364202590475]
We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
arXiv Detail & Related papers (2021-06-06T02:27:55Z)
- Robust Value Iteration for Continuous Control Tasks [99.00362538261972]
When transferring a control policy from simulation to a physical system, the policy needs to be robust to variations in the dynamics to perform well.
We present Robust Fitted Value Iteration, which uses dynamic programming to compute the optimal value function on the compact state domain.
We show that robust value iteration is more robust than deep reinforcement learning algorithms and the non-robust version of the algorithm.
arXiv Detail & Related papers (2021-05-25T19:48:35Z)
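To make the max-min backup behind that summary concrete, here is a minimal tabular sketch of robust value iteration. It assumes a discretized state space, a finite action set, and an adversary that picks the worst of a finite set of dynamics perturbations; the paper's Robust Fitted Value Iteration for continuous control is considerably more involved.

```python
# Minimal robust value iteration on a discretized problem: the agent maximizes over
# actions while an adversary minimizes over a finite set of dynamics perturbations.
import numpy as np

def robust_value_iteration(P, R, gamma=0.95, iters=200):
    """
    P: (A, K, S, S) transition matrices, one per action a and perturbation k.
    R: (S, A) reward table.
    Returns the robust value function V with shape (S,).
    """
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        # Q[a, k, s] = R[s, a] + gamma * sum_s' P[a, k, s, s'] V[s']
        Q = R.T[:, None, :] + gamma * np.einsum("akst,t->aks", P, V)
        V = Q.min(axis=1).max(axis=0)   # worst-case over k, best over a
    return V
```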
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.