PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar
- URL: http://arxiv.org/abs/2312.14239v2
- Date: Fri, 5 Apr 2024 15:00:58 GMT
- Title: PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar
- Authors: Tzofi Klinghoffer, Xiaoyu Xiang, Siddharth Somasundaram, Yuchen Fan, Christian Richardt, Ramesh Raskar, Rakesh Ranjan
- Abstract summary: 3D reconstruction from a single view is challenging because of the ambiguity of monocular cues and the lack of information about occluded regions.
We propose using time-of-flight data captured by a single-photon avalanche diode to overcome these limitations.
We demonstrate that we can reconstruct visible and occluded geometry without data priors or reliance on controlled ambient lighting or scene albedo.
- Score: 25.332440946211236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D reconstruction from a single-view is challenging because of the ambiguity from monocular cues and lack of information about occluded regions. Neural radiance fields (NeRF), while popular for view synthesis and 3D reconstruction, are typically reliant on multi-view images. Existing methods for single-view 3D reconstruction with NeRF rely on either data priors to hallucinate views of occluded regions, which may not be physically accurate, or shadows observed by RGB cameras, which are difficult to detect in ambient light and low albedo backgrounds. We propose using time-of-flight data captured by a single-photon avalanche diode to overcome these limitations. Our method models two-bounce optical paths with NeRF, using lidar transient data for supervision. By leveraging the advantages of both NeRF and two-bounce light measured by lidar, we demonstrate that we can reconstruct visible and occluded geometry without data priors or reliance on controlled ambient lighting or scene albedo. In addition, we demonstrate improved generalization under practical constraints on sensor spatial- and temporal-resolution. We believe our method is a promising direction as single-photon lidars become ubiquitous on consumer devices, such as phones, tablets, and headsets.
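The two-bounce optical paths the abstract describes can be illustrated numerically: a laser pulse travels from the emitter to an illuminated surface point, bounces to a second scene point, and returns to the sensor, so the measured time of flight constrains the sum of the three segment lengths. The sketch below is a minimal illustration of that geometry, not the paper's implementation; all point names and values are hypothetical.

```python
import numpy as np

C = 2.998e8  # speed of light in m/s

def two_bounce_tof(laser, x1, x2, sensor):
    """Total time of flight for a two-bounce optical path:
    laser -> illuminated point x1 -> second scene point x2 -> sensor.
    All arguments are 3D points in metres (NumPy arrays)."""
    path_len = (np.linalg.norm(x1 - laser)
                + np.linalg.norm(x2 - x1)
                + np.linalg.norm(sensor - x2))
    return path_len / C

# Example: collocated laser and sensor at the origin
laser = sensor = np.zeros(3)
x1 = np.array([0.0, 0.0, 2.0])   # illuminated wall point
x2 = np.array([1.0, 0.0, 2.0])   # second scene point
t = two_bounce_tof(laser, x1, x2, sensor)
# path length = 2 + 1 + sqrt(5) ≈ 5.236 m, so t ≈ 17.5 ns
```

Because the arrival time fixes only the total path length, a single measurement constrains x2 to an ellipsoid with foci at x1 and the sensor; scanning many illumination points x1 is what lets the method triangulate both visible and occluded geometry.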
Related papers
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction [12.86184159775286]
We propose a novel method for rendering transient NeRFs that take as input the raw, time-resolved photon count histograms measured by a single-photon lidar system.
We evaluate our method on a first-of-its-kind dataset of simulated and captured transient multiview scans from a prototype single-photon lidar.
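A transient histogram of the kind this paper takes as input records photon counts per time bin; recovering a first-bounce depth from one pixel reduces to locating the peak bin and converting its round-trip time to distance. A minimal sketch under assumed bin width, separate from the paper's neural rendering pipeline:

```python
import numpy as np

C = 2.998e8          # speed of light, m/s
BIN_WIDTH = 100e-12  # assumed 100 ps time bins

def depth_from_histogram(counts):
    """Estimate depth from a single-pixel photon count histogram
    by locating the peak bin (the direct, first-bounce return)."""
    peak_bin = int(np.argmax(counts))
    t_round_trip = (peak_bin + 0.5) * BIN_WIDTH  # bin-centre time
    return C * t_round_trip / 2.0  # halve for the round trip

# A toy histogram: a strong return in bin 133 (~2 m away) over ambient noise
hist = np.zeros(256)
hist[133] = 40.0  # signal photons
hist += np.random.default_rng(0).poisson(0.5, 256)  # ambient/dark counts
```

The point of a transient NeRF is to render the whole histogram (including later, indirect returns) rather than just this peak, but the peak-to-depth conversion above is the classical baseline it generalizes.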
arXiv Detail & Related papers (2023-07-14T15:17:04Z) - Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z) - Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields [65.96818069005145]
Vanilla NeRF is viewer-centred: it simplifies the rendering process to light emitted from 3D locations along the viewing direction.
Inspired by the emission theory of ancient Greeks, we make slight modifications on vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage.
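The Concealing Fields idea can be illustrated on the standard volume-rendering quadrature: a per-sample concealment factor in (0, 1] multiplies the transmittance, reducing how much light is transported along the ray. The sketch below is a hypothetical illustration of that idea; the function name and the exact placement of the concealing term are assumptions, not the paper's formulation.

```python
import numpy as np

def render_ray(sigmas, colors, deltas, conceal):
    """Volume rendering along one ray with a per-sample concealing
    factor in (0, 1] that attenuates transmitted light.
    sigmas: densities, colors: (N, 3) RGB, deltas: sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # transmittance, with concealment applied to each traversed sample
    trans = np.cumprod(
        np.concatenate([[1.0], (1.0 - alphas[:-1]) * conceal[:-1]]))
    weights = trans * alphas * conceal
    return (weights[:, None] * colors).sum(axis=0)
```

With `conceal` set to all ones this reduces to the usual NeRF compositing; values below one darken the rendering, which is the mechanism used to train on low-light views.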
arXiv Detail & Related papers (2023-03-10T09:28:09Z) - 3D Scene Inference from Transient Histograms [17.916392079019175]
Time-resolved image sensors that capture light at pico-to-nanosecond timescales were once limited to niche applications.
We propose low-cost and low-power imaging modalities that capture scene information from minimal time-resolved image sensors.
arXiv Detail & Related papers (2022-11-09T18:31:50Z) - Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z) - Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of these three extensions provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z) - TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
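Continuous-wave ToF cameras of the kind TöRF models measure a per-pixel phase shift of an amplitude-modulated signal, from which depth follows directly. The sketch below shows only that standard phase-to-depth relation, not the paper's neural image formation model; the modulation frequency is an assumed example value.

```python
import math

C = 2.998e8  # speed of light, m/s

def cw_tof_depth(phase, f_mod):
    """Depth from a continuous-wave ToF phase measurement.
    phase: measured phase shift in radians; f_mod: modulation
    frequency in Hz. Unambiguous only up to C / (2 * f_mod)."""
    return C * phase / (4.0 * math.pi * f_mod)

# 30 MHz modulation: the full 2*pi phase wraps at C/(2f) ≈ 5.0 m
d = cw_tof_depth(math.pi, 30e6)  # half the unambiguous range, ≈ 2.5 m
```

The phase-wrapping ambiguity noted in the docstring is one reason raw ToF measurements benefit from being fused into a scene-level representation.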
arXiv Detail & Related papers (2021-09-30T17:12:59Z) - Depth-supervised NeRF: Fewer Views and Faster Training for Free [66.16386801362643]
DS-NeRF is a loss for learning neural radiance fields that takes advantage of readily-available depth supervision.
We find that DS-NeRF can render more accurate images given fewer training views while training 2-6x faster.
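Depth supervision of the kind DS-NeRF describes can be sketched as an extra loss comparing a ray's rendered depth (the expectation of sample depths under the volume-rendering weights) against a sparse observed depth, e.g. from structure-from-motion. This is a simplified L2 stand-in; the paper's actual loss is a KL-style term over the ray distribution.

```python
import numpy as np

def rendered_depth(weights, z_vals):
    """Expected termination depth of a ray under volume-rendering weights."""
    return float((weights * z_vals).sum())

def depth_loss(weights, z_vals, z_target):
    """Simple L2 penalty between rendered and observed depth
    (a sketch; DS-NeRF's loss is distributional, not a plain L2)."""
    return (rendered_depth(weights, z_vals) - z_target) ** 2

w = np.array([0.0, 0.9, 0.1])  # volume-rendering weights along a ray
z = np.array([1.0, 2.0, 3.0])  # sample depths
# rendered depth = 0.9 * 2 + 0.1 * 3 = 2.1
```

Because the depth term constrains where each supervised ray terminates, far fewer training views are needed to pin down the geometry.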
arXiv Detail & Related papers (2021-07-06T17:58:35Z) - Shadow Neural Radiance Fields for Multi-view Satellite Photogrammetry [1.370633147306388]
We present a new generic method for shadow-aware multi-view satellite photogrammetry of Earth Observation scenes.
Our proposed method, the Shadow Neural Radiance Field (S-NeRF), follows recent advances in implicit volumetric representation learning.
For each scene, we train S-NeRF using very high spatial resolution optical images taken from known viewing angles. The learning requires no labels or shape priors: it is self-supervised by an image reconstruction loss.
arXiv Detail & Related papers (2021-04-20T10:17:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.