Non-line-of-Sight Imaging via Neural Transient Fields
- URL: http://arxiv.org/abs/2101.00373v2
- Date: Tue, 5 Jan 2021 05:28:14 GMT
- Title: Non-line-of-Sight Imaging via Neural Transient Fields
- Authors: Siyuan Shen, Zi Wang, Ping Liu, Zhengqing Pan, Ruiqian Li, Tian Gao,
Shiying Li, and Jingyi Yu
- Abstract summary: We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.
Inspired by the recent Neural Radiance Field (NeRF) approach, we use a multi-layer perceptron (MLP) to represent the neural transient field or NeTF.
We formulate a spherical volume NeTF reconstruction pipeline, applicable to both confocal and non-confocal setups.
- Score: 52.91826472034646
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging.
Previous solutions have sought to explicitly recover the 3D geometry (e.g., as
point clouds) or voxel density (e.g., within a pre-defined volume) of the
hidden scene. In contrast, inspired by the recent Neural Radiance Field (NeRF)
approach, we use a multi-layer perceptron (MLP) to represent the neural
transient field or NeTF. However, NeTF measures the transient over spherical
wavefronts rather than the radiance along lines. We therefore formulate a
spherical volume NeTF reconstruction pipeline, applicable to both confocal and
non-confocal setups. Compared with NeRF, NeTF samples a much sparser set of
viewpoints (scanning spots) and the sampling is highly uneven. We thus
introduce a Monte Carlo technique to improve the robustness in the
reconstruction. Comprehensive experiments on synthetic and real datasets
demonstrate that NeTF provides higher-quality reconstructions and preserves fine
details largely missing in the state of the art.
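As a rough illustration of the spherical-volume formulation described above, the sketch below (NumPy, with hypothetical names; not the authors' implementation) represents the hidden scene with a small MLP over positionally encoded 3D points, and Monte Carlo-integrates density times albedo over the spherical wavefront of radius r = c·t/2 around a confocal scanning spot:

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, n_freqs=4):
    # NeRF-style encoding: [x, sin(2^k pi x), cos(2^k pi x)] per frequency.
    feats = [x]
    for k in range(n_freqs):
        feats.append(np.sin(2.0**k * np.pi * x))
        feats.append(np.cos(2.0**k * np.pi * x))
    return np.concatenate(feats, axis=-1)

class TinyMLP:
    # Stand-in for the NeTF MLP: maps an encoded 3D point to
    # (volume density sigma, albedo rho). Weights are random here;
    # in the paper they would be optimized against measured transients.
    def __init__(self, d_in, d_hidden=32):
        self.w1 = rng.normal(0.0, 0.1, (d_in, d_hidden))
        self.w2 = rng.normal(0.0, 0.1, (d_hidden, 2))

    def __call__(self, x):
        h = np.maximum(x @ self.w1, 0.0)          # ReLU
        out = h @ self.w2
        sigma = np.log1p(np.exp(out[:, 0]))       # softplus -> nonnegative
        rho = 1.0 / (1.0 + np.exp(-out[:, 1]))    # sigmoid -> [0, 1]
        return sigma, rho

def render_transient_bin(mlp, spot, r, n_samples=256):
    """Monte Carlo estimate of one transient bin for a confocal scan.

    Photons arriving at time t have traveled a round trip of length 2r,
    so they originate on the hemisphere of radius r = c*t/2 around the
    scanning spot; we integrate density * albedo over that hemisphere.
    """
    # Uniform directions on the upper hemisphere (z >= 0).
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    cos_theta = rng.uniform(0.0, 1.0, n_samples)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    dirs = np.stack([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta], axis=-1)
    pts = spot + r * dirs
    sigma, rho = mlp(positional_encoding(pts))
    # Hemisphere area 2*pi*r^2 times the mean integrand; radiometric
    # falloff factors are folded into a constant in this sketch.
    return 2.0 * np.pi * float(np.mean(sigma * rho))

mlp = TinyMLP(d_in=3 * (1 + 2 * 4))   # 3 coords, 4 sin/cos frequency pairs
spot = np.array([0.0, 0.0, 0.0])
transient = [render_transient_bin(mlp, spot, r) for r in np.linspace(0.2, 1.0, 5)]
print(transient)
```

In the actual method the MLP weights are fit so that transients rendered this way match the measured histograms at every scanning spot; the paper's Monte Carlo sampling additionally compensates for the sparse, uneven distribution of those spots.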
Related papers
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from
3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- PERF: Performant, Explicit Radiance Fields [1.933681537640272]
We present a novel way of approaching image-based 3D reconstruction based on radiance fields.
The problem of volumetric reconstruction is formulated as a non-linear least-squares problem and solved explicitly without the use of neural networks.
arXiv Detail & Related papers (2021-12-10T15:29:00Z)
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z)
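UNISURF's unifying insight can be illustrated with a short NumPy sketch (hypothetical helper names; not the authors' code): if the per-sample alpha along a ray is predicted directly as an occupancy in [0, 1], the standard volume-rendering compositing weights reduce to pure surface rendering when the occupancy becomes a hard {0, 1} indicator.

```python
import numpy as np

def volume_render_occupancy(occupancies, colors):
    """Alpha-composite colors along a ray using per-sample occupancies.

    Compositing weights follow the standard volume-rendering rule
        w_i = o_i * prod_{j < i} (1 - o_j),
    where o_i in [0, 1] is the predicted occupancy at sample i. With a
    hard {0, 1} occupancy, all weight lands on the first occupied
    sample, i.e. the surface intersection.
    """
    occupancies = np.asarray(occupancies, dtype=float)
    # Transmittance before each sample: product of (1 - o_j) for j < i.
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - occupancies[:-1]]))
    weights = occupancies * transmittance
    return weights @ np.asarray(colors, dtype=float), weights

# Soft occupancies -> volume rendering: color is blended across samples.
color_soft, w_soft = volume_render_occupancy([0.2, 0.5, 0.9], [1.0, 2.0, 3.0])

# Hard {0, 1} occupancies -> surface rendering: the first occupied
# sample (the second one here) receives all the weight.
color_hard, w_hard = volume_render_occupancy([0.0, 1.0, 1.0], [1.0, 2.0, 3.0])
print(color_hard)  # 2.0
```

This one formulation supports both rendering modes, which is what lets UNISURF train a single implicit representation for both surface and volume rendering.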
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.