Neural Fields in Robotics: A Survey
- URL: http://arxiv.org/abs/2410.20220v1
- Date: Sat, 26 Oct 2024 16:26:41 GMT
- Title: Neural Fields in Robotics: A Survey
- Authors: Muhammad Zubair Irshad, Mauro Comi, Yen-Chen Lin, Nick Heppert, Abhinav Valada, Rares Ambrus, Zsolt Kira, Jonathan Tremblay
- Abstract summary: Neural Fields have emerged as a transformative approach for 3D scene representation in computer vision and robotics.
This survey explores their applications in robotics, emphasizing their potential to enhance perception, planning, and control.
Their compactness, memory efficiency, and differentiability, along with seamless integration with foundation and generative models, make them ideal for real-time applications.
- Abstract: Neural Fields have emerged as a transformative approach for 3D scene representation in computer vision and robotics, enabling accurate inference of geometry, 3D semantics, and dynamics from posed 2D data. Leveraging differentiable rendering, Neural Fields encompass both continuous implicit and explicit neural representations enabling high-fidelity 3D reconstruction, integration of multi-modal sensor data, and generation of novel viewpoints. This survey explores their applications in robotics, emphasizing their potential to enhance perception, planning, and control. Their compactness, memory efficiency, and differentiability, along with seamless integration with foundation and generative models, make them ideal for real-time applications, improving robot adaptability and decision-making. This paper provides a thorough review of Neural Fields in robotics, categorizing applications across various domains and evaluating their strengths and limitations, based on over 200 papers. First, we present four key Neural Fields frameworks: Occupancy Networks, Signed Distance Fields, Neural Radiance Fields, and Gaussian Splatting. Second, we detail Neural Fields' applications in five major robotics domains: pose estimation, manipulation, navigation, physics, and autonomous driving, highlighting key works and discussing takeaways and open challenges. Finally, we outline the current limitations of Neural Fields in robotics and propose promising directions for future research. Project page: https://robonerf.github.io
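To make the representations named in the abstract concrete, below is a minimal, illustrative sketch (not code from the survey or any cited work) of a coordinate-based neural field in PyTorch: an MLP with sinusoidal positional encoding that maps 3D points to a signed distance value, i.e. the Signed Distance Field flavour of neural field. The class names, hyperparameters, and toy sphere-fitting loss are assumptions for demonstration only.

```python
# Minimal, illustrative coordinate-based neural field (not code from the survey).
# An MLP with sinusoidal positional encoding maps 3D points to a signed distance,
# i.e. the "Signed Distance Field" flavour of neural field named in the abstract.
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """Encode 3D coordinates with sin/cos at increasing frequencies (NeRF-style)."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device) * torch.pi
    angles = x[..., None] * freqs            # (N, 3, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)         # (N, 3 * 2 * num_freqs)


class NeuralSDF(nn.Module):
    """Continuous implicit field: f(x, y, z) -> signed distance to a surface."""

    def __init__(self, num_freqs: int = 6, hidden: int = 128):
        super().__init__()
        in_dim = 3 * 2 * num_freqs
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        self.num_freqs = num_freqs

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.net(positional_encoding(points, self.num_freqs)).squeeze(-1)


# Toy fitting loop: regress the analytic SDF of a unit-radius-0.5 sphere from
# sampled points. Because the field is differentiable end to end, the same
# pattern extends to losses defined through differentiable rendering.
model = NeuralSDF()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    pts = torch.rand(1024, 3) * 2.0 - 1.0          # random points in [-1, 1]^3
    target = pts.norm(dim=-1) - 0.5                # ground-truth SDF of a sphere
    loss = torch.nn.functional.mse_loss(model(pts), target)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Swapping the scalar output for an occupancy probability, or for density plus colour rendered along camera rays, yields the Occupancy Network and NeRF variants the survey catalogues; Gaussian Splatting instead stores an explicit set of 3D Gaussians that are rasterized differentiably.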
Related papers
- NeRF in Robotics: A Survey [95.11502610414803]
The recent emergence of neural implicit representations has introduced radical innovation to the computer vision and robotics fields.
NeRF has sparked a trend because of its significant representational advantages, such as simplified mathematical models, compact environment storage, and continuous scene representations.
arXiv Detail & Related papers (2024-05-02T14:38:18Z) - Object Registration in Neural Fields [6.361537379901403]
We provide an expanded analysis of the recent Reg-NF neural field registration method and its use-cases within a robotics context.
We showcase the scenario of determining the 6-DoF pose of known objects within a scene using scene and object neural field models.
We show how this may be used to better represent objects within imperfectly modelled scenes and generate new scenes by substituting object neural field models into the scene.
arXiv Detail & Related papers (2024-04-29T02:33:40Z) - Robo360: A 3D Omnispective Multi-Material Robotic Manipulation Dataset [26.845899347446807]
Recent interest in leveraging 3D algorithms has led to advancements in robot perception and physical understanding.
We present Robo360, a dataset that features robotic manipulation with dense view coverage.
We hope that Robo360 can open new research directions yet to be explored at the intersection of understanding the physical world in 3D and robot control.
arXiv Detail & Related papers (2023-12-09T09:12:03Z) - NSLF-OL: Online Learning of Neural Surface Light Fields alongside Real-time Incremental 3D Reconstruction [0.76146285961466]
The paper proposes a novel Neural Surface Light Fields model that copes with a small range of view directions while still producing good results in unseen directions.
Our model learns Neural Surface Light Fields (NSLF) online alongside real-time 3D reconstruction, with a sequential data stream as the shared input.
In addition to online training, our model also provides real-time rendering for visualization once the data stream is complete.
arXiv Detail & Related papers (2023-04-29T15:41:15Z) - ExAug: Robot-Conditioned Navigation Policies via Geometric Experience Augmentation [73.63212031963843]
We propose a novel framework, ExAug, to augment the experiences of different robot platforms from multiple datasets in diverse environments.
The trained policy is evaluated on two new robot platforms with three different cameras in indoor and outdoor environments with obstacles.
arXiv Detail & Related papers (2022-10-14T01:32:15Z) - Learning Multi-Object Dynamics with Compositional Neural Radiance Fields [63.424469458529906]
We present a method to learn compositional predictive models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks.
NeRFs have become a popular choice for representing scenes due to their strong 3D prior.
For planning, we utilize RRTs in the learned latent space, where we can exploit our model and the implicit object encoder to make sampling the latent space more informative and efficient (a rough latent-space RRT sketch follows this list).
arXiv Detail & Related papers (2022-02-24T01:31:29Z) - Neural Fields in Visual Computing and Beyond [54.950885364735804]
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
arXiv Detail & Related papers (2021-11-22T18:57:51Z) - 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)
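As referenced in the "Learning Multi-Object Dynamics with Compositional Neural Radiance Fields" entry above, planning can be run as an RRT directly in a learned latent space. The sketch below is a rough, hypothetical illustration of that idea using stand-in modules (an untrained dynamics MLP, Euclidean nearest-neighbour lookup, random action sampling); it is not the cited paper's implementation.

```python
# Rough, hypothetical sketch of RRT planning in a learned latent space
# (stand-in models; not the implementation from the cited paper).
import torch
import torch.nn as nn

latent_dim, action_dim = 16, 4
dynamics = nn.Sequential(            # placeholder learned dynamics z' = f(z, a)
    nn.Linear(latent_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
)

def rrt_latent(z_start, z_goal, n_iters=500, goal_tol=0.5):
    """Grow a tree of latent states by sampling actions through the dynamics model."""
    nodes, parents, actions = [z_start], [None], [None]
    for _ in range(n_iters):
        z_rand = torch.randn(latent_dim)                    # sample a latent target
        dists = torch.stack([(z - z_rand).norm() for z in nodes])
        i_near = int(dists.argmin())                        # nearest tree node
        a = torch.randn(action_dim)                         # sample a random action
        with torch.no_grad():
            z_new = dynamics(torch.cat([nodes[i_near], a])) # propagate one step
        nodes.append(z_new); parents.append(i_near); actions.append(a)
        if (z_new - z_goal).norm() < goal_tol:              # reached goal region
            plan, i = [], len(nodes) - 1
            while parents[i] is not None:                   # backtrack action sequence
                plan.append(actions[i]); i = parents[i]
            return list(reversed(plan))
    return None                                             # no plan found

plan = rrt_latent(torch.zeros(latent_dim), torch.ones(latent_dim))
print("plan length:", None if plan is None else len(plan))
```

In practice, the dynamics model would be the learned compositional predictor and the distance metric would reflect task-relevant structure in the latent space, rather than the random stand-ins used here.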