Virtual light transport matrices for non-line-of-sight imaging
- URL: http://arxiv.org/abs/2103.12622v1
- Date: Tue, 23 Mar 2021 15:17:45 GMT
- Title: Virtual light transport matrices for non-line-of-sight imaging
- Authors: Julio Marco, Adrian Jarabo, Ji Hyun Nam, Xiaochun Liu, Miguel Ángel Cosculluela, Andreas Velten, Diego Gutierrez
- Abstract summary: The light transport matrix (LTM) is an instrumental tool in line-of-sight (LOS) imaging, describing how light interacts with the scene.
We introduce a framework to estimate the LTM of non-line-of-sight (NLOS) scenarios, coupling recent virtual forward light propagation models for NLOS imaging with the LOS light transport equation.
- Score: 19.19505452561486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The light transport matrix (LTM) is an instrumental tool in line-of-sight
(LOS) imaging, describing how light interacts with the scene and enabling
applications such as relighting or separation of illumination components. We
introduce a framework to estimate the LTM of non-line-of-sight (NLOS)
scenarios, coupling recent virtual forward light propagation models for NLOS
imaging with the LOS light transport equation. We design computational
projector-camera setups, and use these virtual imaging systems to estimate the
transport matrix of hidden scenes. We introduce the specific illumination
functions to compute the different elements of the matrix, overcoming the
challenging wide-aperture conditions of NLOS setups. Our NLOS light transport
matrix allows us to (re)illuminate specific locations of a hidden scene, and
separate direct, first-order indirect, and higher-order indirect illumination
of complex cluttered hidden scenes, similar to existing LOS techniques.
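To make the role of the LTM concrete, the sketch below shows the two operations the abstract names, relighting and component separation, as plain matrix algebra. It is a minimal illustration: the matrix T, the direct/indirect mask, and all sizes are hypothetical stand-ins, not the authors' setup.

```python
import numpy as np

# Hypothetical sizes: m camera pixels, n projector (virtual light) elements.
m, n = 64, 32
rng = np.random.default_rng(0)

# Toy light transport matrix: column j is the camera image produced by
# turning on projector element j alone (all entries are radiance >= 0).
T = rng.random((m, n))

# Relighting: the image under any illumination pattern p is a single
# matrix-vector product, with no need to re-measure the scene.
p = np.zeros(n)
p[5] = 1.0                      # illuminate one hidden-scene location
image = T @ p                   # camera image under that illumination

# Component separation (schematic): if a mask marks which (pixel, light)
# pairs correspond to direct transport, the remainder is indirect.
direct_mask = rng.random((m, n)) > 0.7   # placeholder geometry test
direct   = (T * direct_mask) @ np.ones(n)
indirect = (T * ~direct_mask) @ np.ones(n)
assert np.allclose(direct + indirect, T @ np.ones(n))
```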
Related papers
- Iterating the Transient Light Transport Matrix for Non-Line-of-Sight Imaging [4.563825593952498]
Time-resolved non-line-of-sight (NLOS) imaging employs an active system that measures part of the Transient Light Transport Matrix (TLTM).
In this work, we demonstrate that the full TLTM can be processed with efficient algorithms to focus and detect our illumination in different parts of the hidden scene.
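Such focusing admits a compact linear-algebra reading: power iteration on the transport matrix converges to the illumination pattern that maximizes delivered energy. The sketch below assumes the (time-collapsed) TLTM is available as a plain real matrix; iterative time reversal of this kind is a standard tool and is offered as an illustration, not as the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.random((128, 64))       # hypothetical (time-collapsed) transport matrix

# Power iteration on T^T T: the fixed point is the leading right singular
# vector of T, i.e. the illumination pattern whose transported energy
# ||T v|| is maximal -- it "focuses" on the dominant scene feature.
v = rng.random(64)
for _ in range(50):
    v = T.T @ (T @ v)
    v /= np.linalg.norm(v)

# Compare with a direct SVD to confirm convergence (up to sign).
_, _, Vt = np.linalg.svd(T)
assert min(np.linalg.norm(v - Vt[0]), np.linalg.norm(v + Vt[0])) < 1e-6
```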
arXiv Detail & Related papers (2024-12-13T17:35:42Z)
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
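For reference, the Blinn-Phong split into ambient, diffuse, and specular terms can be written out directly; this is a generic shading sketch with illustrative material constants, not values from the paper.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(n, l, v, ka=0.1, kd=0.7, ks=0.4, shininess=32.0):
    """Return the (ambient, diffuse, specular) terms separately,
    mirroring the three components the scene is decomposed into."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                       # half vector
    ambient  = ka
    diffuse  = kd * max(np.dot(n, l), 0.0)
    specular = ks * max(np.dot(n, h), 0.0) ** shininess
    return ambient, diffuse, specular

a, d, s = blinn_phong(n=np.array([0.0, 0.0, 1.0]),
                      l=np.array([0.3, 0.3, 1.0]),
                      v=np.array([0.0, 0.0, 1.0]))
print(f"ambient={a:.3f}  diffuse={d:.3f}  specular={s:.3f}")
```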
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
- Passive Non-Line-of-Sight Imaging with Light Transport Modulation [45.992851199035336]
We propose NLOS-LTM, a novel passive NLOS imaging method that effectively handles multiple light transport conditions with a single network.
We achieve this by inferring a latent light transport representation from the projection image and using this representation to modulate the network that reconstructs the hidden image from the projection image.
Experiments on a large-scale passive NLOS dataset demonstrate the superiority of the proposed method.
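The "modulation" could, for instance, take the form of feature-wise conditioning; the sketch below shows a FiLM-style scale-and-shift, which is an assumption for illustration rather than the paper's confirmed architecture.

```python
import numpy as np

def modulate(features, light_code, W_gamma, W_beta):
    """Condition reconstruction features on a latent light-transport code
    by feature-wise scale (gamma) and shift (beta) -- a FiLM-style scheme."""
    gamma = W_gamma @ light_code          # per-channel scale
    beta  = W_beta  @ light_code          # per-channel shift
    return gamma[:, None, None] * features + beta[:, None, None]

rng = np.random.default_rng(2)
C, H, W, D = 16, 8, 8, 4                  # hypothetical channel/latent sizes
feats      = rng.standard_normal((C, H, W))   # decoder features
light_code = rng.standard_normal(D)           # inferred from projection image
out = modulate(feats, light_code,
               rng.standard_normal((C, D)), rng.standard_normal((C, D)))
print(out.shape)                           # (16, 8, 8)
```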
arXiv Detail & Related papers (2023-12-26T11:49:23Z)
- Self-Calibrating, Fully Differentiable NLOS Inverse Rendering [15.624750787186803]
Time-resolved non-line-of-sight (NLOS) imaging methods reconstruct hidden scenes by inverting the optical paths of indirect illumination measured at visible relay surfaces.
We introduce a fully-differentiable end-to-end NLOS inverse rendering pipeline that self-calibrates the imaging parameters during the reconstruction of hidden scenes.
We demonstrate the robustness of our method to consistently reconstruct geometry and albedo, even under significant noise levels.
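The core idea of self-calibration, optimizing imaging parameters jointly with the scene by gradient descent through a differentiable forward model, can be shown on a toy problem; the scalar model and hand-written gradients below stand in for the paper's differentiable transient renderer.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy differentiable forward model: measurement = gain * (albedo * x) + offset,
# where gain/offset play the role of unknown imaging (calibration) parameters.
true_albedo, true_gain, true_offset = 0.8, 1.5, 0.2
x = rng.random(200)                               # known scene samples
y = true_gain * (true_albedo * x) + true_offset   # observed measurements

albedo, gain, offset = 0.5, 1.0, 0.0              # joint unknowns
lr = 0.1
for _ in range(2000):
    pred = gain * (albedo * x) + offset
    r = pred - y                                   # residual
    # Analytic gradients of the mean squared error:
    albedo -= lr * np.mean(2 * r * gain * x)
    gain   -= lr * np.mean(2 * r * albedo * x)
    offset -= lr * np.mean(2 * r)

# albedo and gain are only identifiable up to their product in this toy model.
print(albedo * gain, true_albedo * true_gain)      # both ~1.2
```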
arXiv Detail & Related papers (2023-09-21T13:15:54Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
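In classical precomputed radiance transfer, relighting reduces to a per-point dot product between a transfer vector and the lighting's basis coefficients; the paper learns this transfer function neurally. A standard PRT sketch with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pixels, n_basis = 1000, 9     # e.g. 9 = order-2 spherical harmonics

# Transfer vectors: how each pixel responds to each lighting basis function
# (these would be predicted by a learned network; random placeholders here).
transfer = rng.random((n_pixels, n_basis))

# Lighting expressed in the same basis (e.g. SH coefficients of an env map).
lighting = rng.standard_normal(n_basis)

# Relit image: one dot product per pixel, independent of scene complexity,
# so global-illumination effects baked into `transfer` come along for free.
relit = transfer @ lighting
print(relit.shape)              # (1000,)
```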
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Free-viewpoint Indoor Neural Relighting from Multi-view Stereo [5.306819482496464]
We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation.
Our method allows illumination to be changed synthetically, while coherently rendering cast shadows and complex glossy materials.
arXiv Detail & Related papers (2021-06-24T20:09:40Z)
- Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets, allowing lighting estimation jointly in the frequency and spatial domains.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z)
- Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis [28.356700318603565]
We explicitly model light transport between scene surfaces, relying on traditional integration schemes and the rendering equation to reconstruct a scene.
By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting and scene composition.
We validate the proposed approach for scene editing, relighting and reflectance estimation learned from synthetic and captured views on a subset of NeRV's datasets.
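The "traditional integration schemes and the rendering equation" mentioned above amount to Monte Carlo estimation of the reflected-radiance integral; below is a minimal diffuse-only estimator, with the BRDF and incoming-radiance function as placeholders rather than anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_hemisphere(n_samples):
    """Uniform directions on the upper hemisphere (z >= 0)."""
    u, v = rng.random(n_samples), rng.random(n_samples)
    z = u                                    # cos(theta) uniform in [0, 1)
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    phi = 2.0 * np.pi * v
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def reflected_radiance(albedo, L_in, n_samples=100_000):
    """Monte Carlo estimate of Lo = integral of (albedo/pi) * Li(w) * cos(theta) dw."""
    w = sample_hemisphere(n_samples)
    cos_theta = w[:, 2]
    pdf = 1.0 / (2.0 * np.pi)                # uniform hemisphere pdf
    f = albedo / np.pi                       # Lambertian BRDF
    return np.mean(f * L_in(w) * cos_theta / pdf)

# Constant incoming radiance: analytic answer is albedo * Li = 0.5 * 1.0.
est = reflected_radiance(albedo=0.5, L_in=lambda w: np.ones(len(w)))
print(est)   # ~0.5
```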
arXiv Detail & Related papers (2021-04-28T03:47:48Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Neural Light Transport for Relighting and View Synthesis [70.39907425114302]
Light transport (LT) of a scene describes how it appears under different lighting and viewing directions.
We propose a semi-parametric approach to learn a neural representation of LT embedded in a texture atlas of known geometric properties.
We show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition.
arXiv Detail & Related papers (2020-08-09T20:13:15Z)
- Scene relighting with illumination estimation in the latent space on an encoder-decoder scheme [68.8204255655161]
In this report we present the methods we tried for achieving scene relighting.
Our models are trained on a rendered dataset of artificial locations with varied scene content, light source location and color temperature.
With this dataset, we used a network with illumination estimation component aiming to infer and replace light conditions in the latent space representation of the concerned scenes.
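Schematically, the latent-space replacement splits the code into content and illumination parts and swaps the latter; in the sketch below the encoder and decoder are made-up placeholders (the real ones are trained networks):

```python
import numpy as np

rng = np.random.default_rng(6)
D_content, D_light = 12, 4              # hypothetical latent split

def encode(image):
    """Placeholder encoder: returns (content, illumination) latents."""
    z = rng.standard_normal(D_content + D_light)   # stands in for a network
    return z[:D_content], z[D_content:]

def decode(content, light):
    """Placeholder decoder mapping latents back to an 'image'."""
    return np.outer(content, light)                # stands in for a network

src = rng.random((32, 32))              # scene to relight
ref = rng.random((32, 32))              # image with the target lighting

content, _    = encode(src)             # keep the scene content
_, target_lit = encode(ref)             # take the reference illumination
relit = decode(content, target_lit)     # scene under swapped lighting
print(relit.shape)                      # (12, 4) for this toy decoder
```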
arXiv Detail & Related papers (2020-06-03T15:25:11Z)