Virtual Mirrors: Non-Line-of-Sight Imaging Beyond the Third Bounce
- URL: http://arxiv.org/abs/2307.14341v1
- Date: Wed, 26 Jul 2023 17:59:20 GMT
- Title: Virtual Mirrors: Non-Line-of-Sight Imaging Beyond the Third Bounce
- Authors: Diego Royo and Talha Sultan and Adolfo Muñoz and Khadijeh
Masumnia-Bisheh and Eric Brandt and Diego Gutierrez and Andreas Velten and
Julio Marco
- Abstract summary: Non-line-of-sight (NLOS) imaging methods are capable of reconstructing complex scenes that are not visible to an observer using indirect illumination.
We make the key observation that planar diffuse surfaces behave specularly at wavelengths used in the computational wave-based NLOS imaging domain.
We leverage this observation to expand the capabilities of NLOS imaging using illumination beyond the third bounce.
- Score: 11.767522056116842
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-line-of-sight (NLOS) imaging methods are capable of reconstructing
complex scenes that are not visible to an observer using indirect illumination.
However, they assume only third-bounce illumination, so they are currently
limited to single-corner configurations, and present limited visibility when
imaging surfaces at certain orientations. To reason about and tackle these
limitations, we make the key observation that planar diffuse surfaces behave
specularly at wavelengths used in the computational wave-based NLOS imaging
domain. We call such surfaces virtual mirrors. We leverage this observation to
expand the capabilities of NLOS imaging using illumination beyond the third
bounce, addressing two problems: imaging single-corner objects at limited
visibility angles, and imaging objects hidden behind two corners. To image
objects at limited visibility angles, we first analyze the reflections of the
known illuminated point on surfaces of the scene as an estimator of the
position and orientation of objects with limited visibility. We then image
those limited visibility objects by computationally building secondary
apertures at other surfaces that observe the target object from a direct
visibility perspective. Beyond single-corner NLOS imaging, we exploit the
specular behavior of virtual mirrors to image objects hidden behind a second
corner by imaging the space behind such virtual mirrors, where the mirror image
of objects hidden around two corners is formed. No specular surfaces were
involved in the making of this paper.
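The abstract's two core ideas can be sketched with elementary formulas (this is an illustrative sketch, not the paper's method: the Rayleigh smoothness criterion below is the standard rule of thumb for when a rough surface acts as a mirror, and all roughness and wavelength values are assumed, not taken from the paper). A painted wall with sub-millimeter roughness scatters visible light diffusely, but at the centimeter-scale virtual wavelengths used in wave-based NLOS imaging it satisfies the smoothness criterion and reflects specularly; the mirror image of a point hidden behind a second corner then forms at its reflection across that wall's plane.

```python
import math

def is_specular_rayleigh(sigma, wavelength, theta_i):
    """Rayleigh smoothness criterion: a surface with RMS roughness `sigma`
    behaves as a mirror at `wavelength` (same units) for incidence angle
    `theta_i` (radians) if sigma < wavelength / (8 * cos(theta_i))."""
    return sigma < wavelength / (8.0 * math.cos(theta_i))

def mirror_image(point, plane_point, normal):
    """Reflect a 3D `point` across the plane through `plane_point` with the
    given `normal`: p' = p - 2 ((p - q) . n) n, with n normalized."""
    norm = math.sqrt(sum(c * c for c in normal))
    n = tuple(c / norm for c in normal)
    d = sum((p - q) * c for p, q, c in zip(point, plane_point, n))
    return tuple(p - 2.0 * d * c for p, c in zip(point, n))

# Assumed numbers: ~0.3 mm wall roughness; 500 nm visible light vs. a
# 2 cm virtual wavelength. Diffuse for the former, mirror-like for the latter.
print(is_specular_rayleigh(sigma=3e-4, wavelength=5e-7, theta_i=0.0))  # False
print(is_specular_rayleigh(sigma=3e-4, wavelength=2e-2, theta_i=0.0))  # True

# Mirror image of a hidden point across a wall in the plane x = 0 (normal +x):
print(mirror_image((1.0, 0.5, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```

Imaging "the space behind the virtual mirror" then amounts to reconstructing at these reflected coordinates, since that is where the mirror image of the doubly-hidden object forms.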
Related papers
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z) - Tabletop Transparent Scene Reconstruction via Epipolar-Guided Optical Flow with Monocular Depth Completion Prior [14.049778178534588]
We introduce a two-stage pipeline for reconstructing transparent objects tailored for mobile platforms.
We use Epipolar-guided Optical Flow (EOF) to fuse several single-view shape priors into a cross-view consistent 3D reconstruction.
Our pipeline significantly outperforms baseline methods in 3D reconstruction quality.
arXiv Detail & Related papers (2023-10-15T21:30:06Z) - ORCa: Glossy Objects as Radiance Field Cameras [23.75324754684283]
We convert glossy objects with unknown geometry into radiance-field cameras to image the world from the object's perspective.
We show that recovering the environment radiance fields enables depth and radiance estimation from the object to its surroundings.
Our method is trained end-to-end on multi-view images of the object and jointly estimates object geometry, diffuse radiance, and the 5D environment radiance field.
arXiv Detail & Related papers (2022-12-08T19:32:08Z) - GAN2X: Non-Lambertian Inverse Rendering of Image GANs [85.76426471872855]
We present GAN2X, a new method for unsupervised inverse rendering that only uses unpaired images for training.
Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make a first attempt to also recover non-Lambertian material properties by exploiting the pseudo paired data generated by a GAN.
Experiments demonstrate that GAN2X can accurately decompose 2D images to 3D shape, albedo, and specular properties for different object categories, and achieves the state-of-the-art performance for unsupervised single-view 3D face reconstruction.
arXiv Detail & Related papers (2022-06-18T16:58:49Z) - Multifocal Stereoscopic Projection Mapping [24.101349988126692]
Current stereoscopic PM technology only satisfies binocular cues and is not capable of providing correct focus cues.
We propose a multifocal approach to mitigate the vergence-accommodation conflict (VAC) in stereoscopic PM.
A 3D CG object is projected from a synchronized high-speed projector only when the virtual image of the projected imagery is located at a desired distance.
arXiv Detail & Related papers (2021-10-08T06:13:10Z) - Towards Non-Line-of-Sight Photography [48.491977359971855]
Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects.
Active NLOS imaging systems rely on the capture of the time of flight of light through the scene.
We propose a new problem formulation, called NLOS photography, to specifically address this deficiency.
arXiv Detail & Related papers (2021-09-16T08:07:13Z) - Refractive Light-Field Features for Curved Transparent Objects in Structure from Motion [10.380414189465345]
We propose a novel image feature for light fields that detects and describes the patterns of light refracted through curved transparent objects.
We demonstrate improved structure-from-motion performance in challenging scenes containing refractive objects.
Our method is a critical step towards allowing robots to operate around refractive objects.
arXiv Detail & Related papers (2021-03-29T05:55:32Z) - Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z) - Single View Metrology in the Wild [94.7005246862618]
We present a novel approach to single view metrology that can recover the absolute scale of a scene represented by 3D heights of objects or camera height above the ground.
Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with 3D entities such as object heights.
We demonstrate state-of-the-art qualitative and quantitative results on several datasets as well as applications including virtual object insertion.
arXiv Detail & Related papers (2020-07-18T22:31:33Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.