Sampling-free obstacle gradients and reactive planning in Neural
Radiance Fields (NeRF)
- URL: http://arxiv.org/abs/2205.01389v1
- Date: Tue, 3 May 2022 09:32:02 GMT
- Title: Sampling-free obstacle gradients and reactive planning in Neural
Radiance Fields (NeRF)
- Authors: Michael Pantic, Cesar Cadena, Roland Siegwart and Lionel Ott
- Abstract summary: We show that by adding the capacity to infer occupancy in a radius to a pre-trained NeRF, we are effectively learning an approximation to a Euclidean Signed Distance Field (ESDF).
Our findings allow for very fast sampling-free obstacle avoidance planning in the implicit representation.
- Score: 43.33810082237658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work investigates the use of Neural implicit representations,
specifically Neural Radiance Fields (NeRF), for geometrical queries and motion
planning. We show that by adding the capacity to infer occupancy in a radius to
a pre-trained NeRF, we are effectively learning an approximation to a Euclidean
Signed Distance Field (ESDF). Using backward differentiation of the augmented
network, we obtain an obstacle gradient that is integrated into an obstacle
avoidance policy based on the Riemannian Motion Policies (RMP) framework. Thus,
our findings allow for very fast sampling-free obstacle avoidance planning in
the implicit representation.
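The abstract outlines a concrete query pipeline: augment a pre-trained NeRF with a head that infers occupancy within a radius (an ESDF-like clearance value), backward-differentiate that head to obtain an obstacle gradient, and feed the gradient into an RMP-style repulsive policy. Below is a minimal PyTorch sketch of that query path, not the authors' implementation; the `RadiusOccupancyHead` module, its training against the frozen NeRF density, and the particular repulsive form in `rmp_obstacle_accel` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RadiusOccupancyHead(nn.Module):
    """Hypothetical head, trained on top of the frozen NeRF density, that maps a
    3D query point to an estimate of the obstacle-free radius around it
    (i.e. an ESDF-like clearance value)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # clearance >= 0
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def obstacle_gradient(esdf_head: nn.Module, x: torch.Tensor):
    """Backward-differentiate the augmented network to obtain d(clearance)/dx.
    The gradient points towards increasing clearance, i.e. away from obstacles."""
    x = x.detach().requires_grad_(True)
    d = esdf_head(x)
    (grad,) = torch.autograd.grad(d.sum(), x)
    return d.detach(), grad

def rmp_obstacle_accel(d: torch.Tensor, grad: torch.Tensor,
                       eta: float = 2.0, eps: float = 1e-6) -> torch.Tensor:
    """One simple choice of RMP-style repulsive term (not the paper's exact form):
    push along the clearance gradient, with magnitude growing as clearance shrinks."""
    direction = grad / (grad.norm(dim=-1, keepdim=True) + eps)
    magnitude = eta / (d.clamp_min(eps) ** 2)
    return magnitude * direction

if __name__ == "__main__":
    esdf_head = RadiusOccupancyHead()           # in practice: trained against the NeRF
    x_robot = torch.tensor([[0.3, -0.1, 0.5]])  # current robot position
    d, grad = obstacle_gradient(esdf_head, x_robot)
    a_obs = rmp_obstacle_accel(d, grad)         # combined with a goal attractor in the RMP tree
    print(f"clearance ~ {d.item():.3f}, repulsive accel = {a_obs.tolist()}")
```

Because the query is a single forward/backward pass through a small network rather than a sampling loop over the volume, it can run at reactive-planning rates, which is the "sampling-free" property claimed in the abstract.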
Related papers
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z) - ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field [52.09661042881063]
We propose an approach that models the provenance of each point -- i.e., the locations where it is likely visible -- of NeRFs as a stochastic field.
We show that modeling per-point provenance during NeRF optimization enriches the model, leading to improvements in novel view synthesis and uncertainty estimation.
arXiv Detail & Related papers (2024-01-16T06:19:18Z) - Where and How: Mitigating Confusion in Neural Radiance Fields from
Sparse Inputs [22.9859132310377]
We present a novel learning framework, WaH-NeRF, which effectively mitigates this confusion under sparse inputs.
We propose a Semi-Supervised NeRF learning Paradigm based on pose perturbation and a Pixel-Patch Correspondence Loss to alleviate prediction confusion.
arXiv Detail & Related papers (2023-08-05T15:59:15Z) - Exact-NeRF: An Exploration of a Precise Volumetric Parameterization for
Neural Radiance Fields [16.870604081967866]
This paper contributes the first approach to offer a precise analytical solution to the mip-NeRF approximation.
We show that this exact formulation, Exact-NeRF, matches the accuracy of mip-NeRF and furthermore extends naturally to more challenging scenarios without further modification.
Our contribution aims both to address the hitherto unexplored issue of frustum approximation in earlier NeRF work and to provide insight into the potential use of analytical solutions in future NeRF extensions.
arXiv Detail & Related papers (2022-11-22T13:56:33Z) - UNeRF: Time and Memory Conscious U-Shaped Network for Training Neural
Radiance Fields [16.826691448973367]
Neural Radiance Fields (NeRFs) increase reconstruction detail for novel view synthesis and scene reconstruction.
However, the increased resolution and model-free nature of such neural fields come at the cost of high training times and excessive memory requirements.
We propose a method to exploit the redundancy of NeRF's sample-based computations by partially sharing evaluations across neighboring sample points.
arXiv Detail & Related papers (2022-06-23T19:57:07Z) - Semi-signed neural fitting for surface reconstruction from unoriented
point clouds [53.379712818791894]
We propose SSN-Fitting to reconstruct a better signed distance field.
SSN-Fitting consists of a semi-signed supervision and a loss-based region sampling strategy.
We conduct experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings.
arXiv Detail & Related papers (2022-06-14T09:40:17Z) - Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks [59.419152768018506]
We show that any optimal policy necessarily satisfies the k-SP constraint.
We propose a novel cost function that penalizes the policy violating SP constraint, instead of completely excluding it.
Our experiments on MiniGrid, DeepMind Lab, Atari, and Fetch show that the proposed method significantly improves proximal policy optimization (PPO).
arXiv Detail & Related papers (2021-07-13T21:39:21Z) - On The Verification of Neural ODEs with Stochastic Guarantees [14.490826225393096]
We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems.
We introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube.
arXiv Detail & Related papers (2020-12-16T11:04:34Z) - On Sparsity in Overparametrised Shallow ReLU Networks [42.33056643582297]
We study the ability of different regularisation strategies to capture solutions requiring only a finite amount of neurons, even in the infinitely wide regime.
We establish that both schemes are minimised by functions having only a finite number of neurons, irrespective of the amount of overparametrisation.
arXiv Detail & Related papers (2020-06-18T01:35:26Z) - Local Propagation in Constraint-based Neural Network [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.