Self-Calibrating, Fully Differentiable NLOS Inverse Rendering
- URL: http://arxiv.org/abs/2309.12047v2
- Date: Tue, 26 Sep 2023 03:36:38 GMT
- Title: Self-Calibrating, Fully Differentiable NLOS Inverse Rendering
- Authors: Kiseok Choi, Inchul Kim, Dongyoung Choi, Julio Marco, Diego Gutierrez,
Min H. Kim
- Abstract summary: Time-resolved non-line-of-sight (NLOS) imaging methods reconstruct hidden scenes by inverting the optical paths of indirect illumination measured at visible relay surfaces.
We introduce a fully-differentiable end-to-end NLOS inverse rendering pipeline that self-calibrates the imaging parameters during the reconstruction of hidden scenes.
We demonstrate the robustness of our method to consistently reconstruct geometry and albedo, even under significant noise levels.
- Score: 15.624750787186803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing time-resolved non-line-of-sight (NLOS) imaging methods reconstruct
hidden scenes by inverting the optical paths of indirect illumination measured
at visible relay surfaces. These methods are prone to reconstruction artifacts
due to inversion ambiguities and capture noise, which are typically mitigated
through the manual selection of filtering functions and parameters. We
introduce a fully-differentiable end-to-end NLOS inverse rendering pipeline
that self-calibrates the imaging parameters during the reconstruction of hidden
scenes, using as input only the measured illumination while working both in the
time and frequency domains. Our pipeline extracts a geometric representation of
the hidden scene from NLOS volumetric intensities and estimates the
time-resolved illumination at the relay wall produced by such geometric
information using differentiable transient rendering. We then use gradient
descent to optimize imaging parameters by minimizing the error between our
simulated time-resolved illumination and the measured illumination. Our
end-to-end differentiable pipeline couples diffraction-based volumetric NLOS
reconstruction with path-space light transport and a simple ray marching
technique to extract detailed, dense sets of surface points and normals of
hidden scenes. We demonstrate the robustness of our method to consistently
reconstruct geometry and albedo, even under significant noise levels.
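The self-calibration loop described in the abstract is concrete enough to sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: reconstruct_volume and render_transient are toy differentiable stand-ins for the diffraction-based volumetric reconstruction and the path-space transient renderer, and the two imaging parameters (a temporal offset and a filter width) are placeholder examples of what such a pipeline might self-calibrate.

```python
import torch
import torch.nn.functional as F

def reconstruct_volume(measured, t_offset, filter_width):
    # Toy stand-in for the diffraction-based volumetric reconstruction:
    # a learnable temporal shift (phase ramp) and a Gaussian low-pass
    # filter applied in the frequency domain, echoing the pipeline's
    # operation in both the time and frequency domains.
    spectrum = torch.fft.rfft(measured, dim=-1)
    freqs = torch.fft.rfftfreq(measured.shape[-1])
    phase = torch.exp(-2j * torch.pi * freqs * t_offset)
    lowpass = torch.exp(-(freqs * filter_width) ** 2)
    return torch.fft.irfft(spectrum * phase * lowpass, n=measured.shape[-1])

def render_transient(volume):
    # Toy stand-in for differentiable transient rendering; the real
    # pipeline ray-marches surface points and normals out of the volume
    # and simulates their time-resolved illumination at the relay wall.
    kernel = torch.tensor([[[0.25, 0.5, 0.25]]])
    return F.conv1d(volume.unsqueeze(1), kernel, padding=1).squeeze(1)

def self_calibrate(measured, n_iters=200, lr=1e-2):
    # Imaging parameters optimized jointly with the reconstruction;
    # both are illustrative placeholders, not the paper's parameter set.
    t_offset = torch.nn.Parameter(torch.zeros(()))
    filter_width = torch.nn.Parameter(torch.ones(()))
    optimizer = torch.optim.Adam([t_offset, filter_width], lr=lr)
    for _ in range(n_iters):
        optimizer.zero_grad()
        volume = reconstruct_volume(measured, t_offset, filter_width)
        simulated = render_transient(volume)
        # Self-calibration objective: mismatch between the simulated
        # and the measured time-resolved illumination.
        loss = F.mse_loss(simulated, measured)
        loss.backward()
        optimizer.step()
    return volume.detach(), t_offset.item(), filter_width.item()

# Example: 4 relay-wall pixels x 256 time bins of synthetic data.
measured = torch.rand(4, 256)
volume, t_offset, filter_width = self_calibrate(measured)
```

Once every stage is differentiable end to end, calibration reduces to ordinary gradient descent on the measurement error; the paper's actual stages, including the ray-marched surface points and normals that feed the transient renderer, would replace both toy stand-ins.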
Related papers
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling [17.435649250309904]
We present MIRReS, a novel two-stage inverse rendering framework.
Our method extracts an explicit geometry (triangular mesh) in stage one, and introduces a more realistic physically-based inverse rendering model.
Our method effectively estimates indirect illumination, including self-shadowing and internal reflections.
arXiv Detail & Related papers (2024-06-24T07:00:57Z)
- Neural Free-Viewpoint Relighting for Glossy Indirect Illumination [44.32630651762033]
We show a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view.
We demonstrate real-time rendering of challenging scenes involving view-dependent reflections and even caustics.
arXiv Detail & Related papers (2023-07-12T17:56:09Z)
- Inverse Global Illumination using a Neural Radiometric Prior [26.29610954064107]
Inverse rendering methods that account for global illumination are becoming more popular.
This paper proposes a radiometric prior as a simple alternative to building complete path integrals in a traditional differentiable path tracer.
arXiv Detail & Related papers (2023-05-03T15:36:39Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting [60.75436852495868]
We present PhySG, an inverse rendering pipeline that reconstructs geometry, materials, and illumination from scratch from RGB input images.
We demonstrate, with both synthetic and real data, that our reconstructions not only enable rendering of novel viewpoints, but also physics-based appearance editing of materials and illumination.
arXiv Detail & Related papers (2021-04-01T17:59:02Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and improves efficiency.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis [45.71507069571216]
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting.
This produces a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions.
arXiv Detail & Related papers (2020-12-07T18:56:08Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)