NSLF-OL: Online Learning of Neural Surface Light Fields alongside
Real-time Incremental 3D Reconstruction
- URL: http://arxiv.org/abs/2305.00282v1
- Date: Sat, 29 Apr 2023 15:41:15 GMT
- Authors: Yijun Yuan and Andreas Nüchter
- Abstract summary: The paper proposes a novel Neural Surface Light Fields model that copes with the small range of observed view directions while still producing good results in unseen directions.
Our model learns the Neural Surface Light Fields (NSLF) online, alongside real-time 3D reconstruction, with a sequential data stream as the shared input.
In addition to online training, our model also provides real-time rendering for visualization once the data stream is complete.
- Score: 0.76146285961466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Immersive novel view generation is an important technology in the field of
graphics and has recently also received attention for operator-based
human-robot interaction. However, the training involved is time-consuming, so
current evaluations focus mainly on object capture. This limits the use of such
models for 3D reconstruction in the robotics community, since robots (1)
usually capture only a very small range of view directions to surfaces, which
causes arbitrary predictions for unseen, novel directions, (2) require
real-time algorithms, and (3) work with growing scenes, e.g., in robotic
exploration. The paper proposes a novel Neural Surface Light Fields model that
copes with the small range of observed view directions while still producing
good results in unseen directions. Exploiting recent encoding techniques, the
training of our model is highly efficient.
In addition, we design Multiple Asynchronous Neural Agents (MANA), a universal
framework that learns each small region in parallel for large-scale growing
scenes. Our model learns the Neural Surface Light Fields (NSLF) online,
alongside real-time 3D reconstruction, with a sequential data stream as the
shared input. Beyond online training, our model also provides real-time
rendering for visualization once the data stream is complete. We conduct
experiments on well-known RGBD indoor datasets, showing the flexibility of
embedding our model into real-time 3D reconstruction and demonstrating
high-fidelity view synthesis for these scenes. The code is available on GitHub.
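The abstract above names two components that a short sketch can make concrete: a surface light field that maps a reconstructed surface point and an encoded view direction to a color, and the MANA scheme that assigns one small model per spatial region of a growing scene and trains those models from a shared frame stream. The Python snippet below is a minimal, hypothetical illustration of these two ideas, not the authors' released implementation: the Fourier encoding, network sizes, grid-based region split, and sequential (rather than truly asynchronous) updates are all assumptions made for brevity.

```python
# Minimal sketch (not the authors' code): a per-region surface light field
# trained online from a stream of surface samples. All names, encodings,
# network sizes, and the grid-based region split are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def fourier_encode(x, n_freqs=4):
    """Frequency-encode points or unit view directions: [x, sin(2^k x), cos(2^k x)]."""
    outs = [x]
    for k in range(n_freqs):
        outs.append(torch.sin((2.0 ** k) * x))
        outs.append(torch.cos((2.0 ** k) * x))
    return torch.cat(outs, dim=-1)


class SurfaceLightField(nn.Module):
    """Tiny MLP mapping (surface point, view direction) -> RGB."""

    def __init__(self, n_freqs=4, hidden=64):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 2 * (3 + 2 * n_freqs * 3)  # encoded point + encoded direction
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, points, dirs):
        feat = torch.cat([fourier_encode(points, self.n_freqs),
                          fourier_encode(dirs, self.n_freqs)], dim=-1)
        return self.mlp(feat)


class RegionAgents:
    """MANA-like bookkeeping: one small model (and optimizer) per spatial cell,
    allocated lazily as the reconstructed scene grows."""

    def __init__(self, cell_size=1.0, lr=1e-3):
        self.cell_size = cell_size
        self.lr = lr
        self.agents = {}  # cell index (ix, iy, iz) -> (model, optimizer)

    def _cell(self, p):
        return tuple(torch.floor(p / self.cell_size).long().tolist())

    def _get(self, cell):
        if cell not in self.agents:
            model = SurfaceLightField()
            self.agents[cell] = (model, torch.optim.Adam(model.parameters(), lr=self.lr))
        return self.agents[cell]

    def train_step(self, points, dirs, colors):
        """Route each sample to its region's agent and take one gradient step.
        (The paper runs the agents asynchronously; this loop is sequential.)"""
        cells = [self._cell(p) for p in points]
        for cell in set(cells):
            mask = torch.tensor([c == cell for c in cells])
            model, opt = self._get(cell)
            opt.zero_grad()
            loss = F.mse_loss(model(points[mask], dirs[mask]), colors[mask])
            loss.backward()
            opt.step()


if __name__ == "__main__":
    # Toy "frame": surface points from the reconstruction, unit view directions,
    # and observed colors, streamed into the agents one step at a time.
    pts = torch.rand(256, 3) * 4.0
    dirs = F.normalize(torch.randn(256, 3), dim=-1)
    rgb = torch.rand(256, 3)
    agents = RegionAgents(cell_size=1.0)
    agents.train_step(pts, dirs, rgb)
    print(f"allocated {len(agents.agents)} region agents")
```

In this sketch, rendering after the data stream ends would reduce to querying the appropriate region's model at the surface points visible from the requested viewpoint.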
Related papers
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- RGB-D Mapping and Tracking in a Plenoxel Radiance Field [5.239559610798646]
We present the vital differences between view synthesis models and 3D reconstruction models.
We also comment on why a depth sensor is essential for modeling accurate geometry in general outward-facing scenes.
Our method achieves state-of-the-art results in both mapping and tracking tasks, while also being faster than competing neural network-based approaches.
arXiv Detail & Related papers (2023-07-07T06:05:32Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene despite the noisy measurements and the large amounts of missing data caused by the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes [27.37830742693236]
We present DeVRF, a novel representation to accelerate learning dynamic radiance fields.
Experiments demonstrate that DeVRF achieves a two-orders-of-magnitude speedup with on-par high-fidelity results.
arXiv Detail & Related papers (2022-05-31T12:13:54Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural renderer.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z)
- Learning Multi-Object Dynamics with Compositional Neural Radiance Fields [63.424469458529906]
We present a method to learn compositional predictive models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks.
NeRFs have become a popular choice for representing scenes due to their strong 3D prior.
For planning, we utilize RRTs in the learned latent space, where we can exploit our model and the implicit object encoder to make sampling the latent space informative and more efficient.
arXiv Detail & Related papers (2022-02-24T01:31:29Z)
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)