PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar
- URL: http://arxiv.org/abs/2312.14239v2
- Date: Fri, 5 Apr 2024 15:00:58 GMT
- Title: PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar
- Authors: Tzofi Klinghoffer, Xiaoyu Xiang, Siddharth Somasundaram, Yuchen Fan, Christian Richardt, Ramesh Raskar, Rakesh Ranjan
- Abstract summary: 3D reconstruction from a single view is challenging because of the ambiguity of monocular cues and the lack of information about occluded regions.
We propose using time-of-flight data captured by a single-photon avalanche diode to overcome these limitations.
We demonstrate that we can reconstruct visible and occluded geometry without data priors or reliance on controlled ambient lighting or scene albedo.
- Score: 25.332440946211236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D reconstruction from a single view is challenging because of the ambiguity of monocular cues and the lack of information about occluded regions. Neural radiance fields (NeRF), while popular for view synthesis and 3D reconstruction, typically rely on multi-view images. Existing methods for single-view 3D reconstruction with NeRF rely on either data priors to hallucinate views of occluded regions, which may not be physically accurate, or shadows observed by RGB cameras, which are difficult to detect in ambient light and low-albedo backgrounds. We propose using time-of-flight data captured by a single-photon avalanche diode to overcome these limitations. Our method models two-bounce optical paths with NeRF, using lidar transient data for supervision. By leveraging the advantages of both NeRF and two-bounce light measured by lidar, we demonstrate that we can reconstruct visible and occluded geometry without data priors or reliance on controlled ambient lighting or scene albedo. In addition, we demonstrate improved generalization under practical constraints on sensor spatial and temporal resolution. We believe our method is a promising direction as single-photon lidars become ubiquitous on consumer devices, such as phones, tablets, and headsets.
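As a rough, unofficial illustration of the measurement model described in the abstract, the sketch below renders a two-bounce transient histogram for one illuminated wall point, with a toy occupancy field standing in for the learned NeRF density. All names (`toy_density`, `two_bounce_transient`, the bin width, the geometry) are hypothetical and not from the paper's code; the sketch only shows how two-bounce path lengths and occlusion-driven visibility end up in time bins.
```python
import numpy as np

C = 3e8            # speed of light (m/s)
BIN_WIDTH = 1e-10  # 100 ps time bins (assumed sensor resolution)
N_BINS = 512

def toy_density(p):
    """Toy stand-in for a learned NeRF density: a solid unit sphere at the origin."""
    return 5.0 if np.linalg.norm(p) < 1.0 else 0.0

def transmittance(a, b, n_samples=64):
    """Beer-Lambert transmittance along the segment a -> b (NeRF-style)."""
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = a[None, :] + ts[:, None] * (b - a)[None, :]
    sigma = np.array([toy_density(p) for p in pts])
    dt = np.linalg.norm(b - a) / n_samples
    return np.exp(-sigma.sum() * dt)

def two_bounce_transient(laser, sensor, wall_pt, scene_pts):
    """Accumulate two-bounce returns (laser -> wall -> scene point -> sensor).

    Each arrival time encodes a total path length; occluded geometry shows up
    through the visibility (shadow) term, which is what the supervision exploits.
    """
    hist = np.zeros(N_BINS)
    for p in scene_pts:
        path = (np.linalg.norm(wall_pt - laser)
                + np.linalg.norm(p - wall_pt)
                + np.linalg.norm(sensor - p))
        vis = transmittance(wall_pt, p) * transmittance(p, sensor)
        b = int(path / C / BIN_WIDTH)
        if b < N_BINS:
            hist[b] += vis  # albedo and radiometric falloff ignored for brevity
    return hist

hist = two_bounce_transient(
    laser=np.array([0.0, 0.0, 0.0]),
    sensor=np.array([0.1, 0.0, 0.0]),
    wall_pt=np.array([0.0, 0.0, 3.0]),
    scene_pts=[np.array([1.5, 0.0, 2.0]), np.array([-1.5, 0.5, 2.5])],
)
print(hist.nonzero()[0])  # time bins that receive two-bounce returns
```
In the actual method, `toy_density` would be the optimized NeRF, and rendered transients like `hist` would be compared against measured SPAD histograms.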
Related papers
- Transientangelo: Few-Viewpoint Surface Reconstruction Using Single-Photon Lidar [8.464054039931245]
Lidar captures 3D scene geometry by emitting pulses of light at a target and recording the round-trip time delay of the reflected light.
However, conventional lidar systems do not output the raw, captured waveforms of backscattered light.
We develop new regularization strategies that improve robustness to photon noise, enabling accurate surface reconstruction with as few as 10 photons per pixel.
arXiv Detail & Related papers (2024-08-22T08:12:09Z)
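For reference, the time-of-flight principle in the Transientangelo summary above reduces to a one-line conversion from round-trip delay to range; this is the textbook relation, not code from that paper:
```python
C = 299_792_458.0  # speed of light (m/s)

def tof_depth(round_trip_delay_s: float) -> float:
    """Range from time of flight: the pulse travels to the target and back."""
    return C * round_trip_delay_s / 2.0

print(tof_depth(20e-9))  # a 20 ns round trip corresponds to ~3 m
```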
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
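The SDS prior analyzed by StableDreamer above is usually implemented as a gradient injected through a frozen diffusion model. Below is a minimal PyTorch-style sketch of that generic rule; `denoiser`, `alphas`, and `text_emb` are placeholders, and this is the standard formulation rather than StableDreamer's exact code:
```python
import torch

def sds_grad(render, denoiser, alphas, text_emb):
    """Generic Score Distillation Sampling gradient: w(t) * (eps_hat - eps)."""
    t = torch.randint(1, len(alphas), (1,))         # random diffusion timestep
    a = alphas[t]                                   # cumulative noise schedule term
    eps = torch.randn_like(render)
    x_t = a.sqrt() * render + (1 - a).sqrt() * eps  # forward-diffuse the render
    with torch.no_grad():
        eps_hat = denoiser(x_t, t, text_emb)        # frozen model's noise prediction
    return (1 - a) * (eps_hat - eps)                # one common choice of w(t)

# usage: backprop this as d(loss)/d(render) into the 3D parameters, e.g.
# render.backward(gradient=sds_grad(render, denoiser, alphas, text_emb))
```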
- Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction [12.86184159775286]
We propose a novel method for rendering transient NeRFs that take as input the raw, time-resolved photon count histograms measured by a single-photon lidar system.
We evaluate our method on a first-of-its-kind dataset of simulated and captured transient multiview scans from a prototype single-photon lidar.
arXiv Detail & Related papers (2023-07-14T15:17:04Z)
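The "time-resolved photon count histograms" that the transient-NeRF entry above takes as input are simply binned single-photon arrival times. A minimal, hypothetical illustration:
```python
import numpy as np

def photon_histogram(arrival_times_s, bin_width_s=1e-10, n_bins=1024):
    """Bin raw single-photon timestamps into a per-pixel transient histogram."""
    bins = (np.asarray(arrival_times_s) / bin_width_s).astype(int)
    return np.bincount(bins[bins < n_bins], minlength=n_bins)

# photons returning from ~3 m (20 ns round trip), with 100 ps timing jitter
times = 20e-9 + np.random.normal(0.0, 1e-10, size=200)
print(photon_histogram(times).argmax())  # peak near bin 200
```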
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields [65.96818069005145]
Vanilla NeRF is viewer-centred: it simplifies the rendering process to light emission from 3D locations along the viewing direction.
Inspired by the emission theory of the ancient Greeks, we make slight modifications to vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage.
arXiv Detail & Related papers (2023-03-10T09:28:09Z)
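As a sketch of the Aleth-NeRF idea above: standard NeRF compositing can be damped by a per-sample concealing factor so that a normally lit scene renders dark, and removing the factor at test time brightens the output. The exact form of the Concealing Fields differs in the paper; the names and the multiplicative placement below are illustrative only:
```python
import numpy as np

def render_ray(sigma, rgb, conceal, dt):
    """NeRF-style alpha compositing with a multiplicative concealing factor.

    sigma, conceal: (N,) densities and concealing factors along the ray
    rgb: (N, 3) per-sample colors; dt: step size between samples
    """
    alpha = 1.0 - np.exp(-sigma * dt)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha * conceal  # conceal < 1 suppresses transported light
    return (weights[:, None] * rgb).sum(axis=0)

dim = render_ray(np.full(64, 0.5), np.ones((64, 3)), np.full(64, 0.3), dt=0.05)
print(dim)  # the same geometry, rendered darker
```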
- 3D Scene Inference from Transient Histograms [17.916392079019175]
Time-resolved image sensors that capture light at pico-to-nanosecond timescales were once limited to niche applications.
We propose low-cost and low-power imaging modalities that capture scene information from minimal time-resolved image sensors.
arXiv Detail & Related papers (2022-11-09T18:31:50Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings, with three extensions for this less controlled setting.
Each of these extensions provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- Depth-supervised NeRF: Fewer Views and Faster Training for Free [69.34556647743285]
DS-NeRF (Depth-supervised Neural Radiance Fields) is a loss for learning radiance fields that takes advantage of readily available depth supervision.
We show that our loss is compatible with other recently proposed NeRF methods, demonstrating that depth is a cheap and easily digestible supervisory signal.
arXiv Detail & Related papers (2021-07-06T17:58:35Z)
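A minimal sketch of the depth supervision behind DS-NeRF above: the rendered ray's expected termination depth is pulled toward a known sparse depth (e.g. from structure-from-motion). The published loss is a KL divergence over the ray's termination distribution; the simpler L2 stand-in below (hypothetical names) only conveys the shape of the signal:
```python
import numpy as np

def depth_loss(weights, t_vals, depth_gt):
    """L2 between the ray's expected termination depth and a known depth."""
    d_pred = (weights * t_vals).sum() / (weights.sum() + 1e-8)
    return (d_pred - depth_gt) ** 2

# weights/t_vals come from NeRF ray marching; depth_gt from sparse SfM points
print(depth_loss(np.array([0.1, 0.7, 0.2]), np.array([1.0, 2.0, 3.0]), 2.0))
```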
- Shadow Neural Radiance Fields for Multi-view Satellite Photogrammetry [1.370633147306388]
We present a new generic method for shadow-aware multi-view satellite photogrammetry of Earth Observation scenes.
Our proposed method, the Shadow Neural Radiance Field (S-NeRF), follows recent advances in implicit volumetric representation learning.
For each scene, we train S-NeRF using very high spatial resolution optical images taken from known viewing angles. The learning requires no labels or shape priors: it is self-supervised by an image reconstruction loss.
arXiv Detail & Related papers (2021-04-20T10:17:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.