Seeing Around Corners with Edge-Resolved Transient Imaging
- URL: http://arxiv.org/abs/2002.07118v1
- Date: Mon, 17 Feb 2020 18:33:48 GMT
- Title: Seeing Around Corners with Edge-Resolved Transient Imaging
- Authors: Joshua Rapp, Charles Saunders, Julián Tachella, John Murray-Bruce,
Yoann Altmann, Jean-Yves Tourneret, Stephen McLaughlin, Robin M. A. Dawson,
Franco N. C. Wong, Vivek K Goyal
- Abstract summary: Non-line-of-sight (NLOS) imaging seeks to form images of objects outside the field of view.
Diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information.
We propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source.
- Score: 15.44831979669091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form
images of objects outside the field of view, with potential applications in
search and rescue, reconnaissance, and even medical imaging. The critical
challenge of NLOS imaging is that diffuse reflections scatter light in all
directions, resulting in weak signals and a loss of directional information. To
address this problem, we propose a method for seeing around corners that
derives angular resolution from vertical edges and longitudinal resolution from
the temporal response to a pulsed light source. We introduce an acquisition
strategy, scene response model, and reconstruction algorithm that enable the
formation of 2.5-dimensional representations -- a plan view plus heights -- and
a 180° field of view (FOV) for large-scale scenes. Our experiments
demonstrate accurate reconstructions of hidden rooms up to 3 meters in each
dimension.
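As a rough illustration of the acquisition geometry described in the abstract (a hypothetical sketch under simplifying assumptions, not the authors' code): a vertical occluding edge partitions the hidden room into angular wedges, and a pulsed light source plus a time-resolved detector yields range within each wedge via time of flight. The wedge index supplies the angle and the histogram peak supplies the range, giving a polar plan view. The function names and parameters below (`range_from_bin`, `plan_view`, bin width, wedge count) are illustrative choices.

```python
import numpy as np

C = 3e8        # speed of light (m/s)
BIN_W = 1e-10  # 100 ps timing bins (illustrative)
N_BINS = 400   # temporal histogram length
N_WEDGES = 8   # angular resolution provided by the occluding edge

def range_from_bin(b):
    """Convert a round-trip time-of-flight bin index to one-way distance."""
    return 0.5 * C * b * BIN_W

def plan_view(histograms):
    """Map per-wedge transient histograms to a plan-view point set.

    histograms: (N_WEDGES, N_BINS) photon counts. Each wedge's strongest
    return is taken as the range of the dominant hidden reflector there.
    """
    # Wedge centers span the 180° FOV on the hidden side of the edge.
    angles = (np.arange(N_WEDGES) + 0.5) * (np.pi / N_WEDGES)
    ranges = np.array([range_from_bin(np.argmax(h)) for h in histograms])
    # Polar (angle, range) -> Cartesian plan view (x along wall, y into room)
    return np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)

# Toy example: a single reflector 2 m away in the third angular wedge.
hists = np.zeros((N_WEDGES, N_BINS))
hists[2, int(2.0 / (0.5 * C * BIN_W))] = 50.0
pts = plan_view(hists)
```

A real reconstruction would model the full scene response and solve an inverse problem rather than take per-wedge histogram peaks, but the sketch shows where the two resolution mechanisms come from: angle from the edge, range from timing.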
Related papers
- A Survey of Representation Learning, Optimization Strategies, and Applications for Omnidirectional Vision [5.208806195877025]
In recent years, the availability of consumer-level 360° cameras has made omnidirectional vision more popular.
The advance of deep learning (DL) has significantly sparked its research and applications.
This paper presents a systematic and comprehensive review and analysis of the recent progress of DL for omnidirectional vision.
arXiv Detail & Related papers (2025-02-11T08:05:11Z)
- GEOcc: Geometrically Enhanced 3D Occupancy Network with Implicit-Explicit Depth Fusion and Contextual Self-Supervision [49.839374549646884]
This paper presents GEOcc, a Geometric-Enhanced Occupancy network tailored for vision-only surround-view perception.
Our approach achieves state-of-the-art performance on the Occ3D-nuScenes dataset with the lowest required image resolution and the lightest image backbone.
arXiv Detail & Related papers (2024-05-17T07:31:20Z)
- Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras typically capture a single shot, and they suffer heavily from low spatial resolution and depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z)
- Multi-Projection Fusion and Refinement Network for Salient Object Detection in 360° Omnidirectional Image [141.10227079090419]
We propose a Multi-Projection Fusion and Refinement Network (MPFR-Net) to detect salient objects in 360° omnidirectional images.
MPFR-Net uses the equirectangular projection image and four corresponding cube-unfolding images as inputs.
Experimental results on two omnidirectional datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-23T14:50:40Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Towards Non-Line-of-Sight Photography [48.491977359971855]
Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects.
Active NLOS imaging systems rely on the capture of the time of flight of light through the scene.
We propose a new problem formulation, called NLOS photography, to specifically address this deficiency.
arXiv Detail & Related papers (2021-09-16T08:07:13Z)
- A Parallel Down-Up Fusion Network for Salient Object Detection in Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-Up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs).
It takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z)
- Efficient Non-Line-of-Sight Imaging from Transient Sinograms [36.154873075911404]
Non-line-of-sight (NLOS) imaging techniques use light that diffusely reflects off of visible surfaces (e.g., walls) to see around corners.
One approach involves using pulsed lasers and ultrafast sensors to measure the travel time of multiply scattered light.
We propose a more efficient form of NLOS scanning that reduces both acquisition times and computational requirements.
arXiv Detail & Related papers (2020-08-06T17:50:50Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.