Real-time Non-line-of-sight Imaging with Two-step Deep Remapping
- URL: http://arxiv.org/abs/2101.10492v1
- Date: Tue, 26 Jan 2021 00:08:54 GMT
- Title: Real-time Non-line-of-sight Imaging with Two-step Deep Remapping
- Authors: Dayu Zhu, Wenshan Cai
- Abstract summary: Non-line-of-sight (NLOS) imaging takes the indirect light into account.
Most solutions employ a transient scanning process, followed by a back-projection based algorithm to reconstruct the NLOS scenes.
Here we propose a new NLOS solution to address the above defects, with innovations on both detection equipment and reconstruction algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional imaging only records the photons directly sent from the object
to the detector, while non-line-of-sight (NLOS) imaging takes the indirect
light into account. To explore the NLOS surroundings, most NLOS solutions
employ a transient scanning process, followed by a back-projection based
algorithm to reconstruct the NLOS scenes. However, transient detection
requires sophisticated apparatus, involves long scanning times, and is not
robust to the ambient environment, while the reconstruction algorithms
typically take tens of minutes and place heavy demands on memory and
computational resources. Here we propose
a new NLOS solution to address the above defects, with innovations on both
detection equipment and reconstruction algorithm. We apply an inexpensive
commercial Lidar for detection, with much higher scanning speed and better
compatibility with real-world imaging tasks. Our reconstruction framework is deep
learning based, consisting of a variational autoencoder and a compression
neural network. The generative feature and the two-step reconstruction strategy
of the framework guarantee high fidelity of NLOS imaging. The overall detection
and reconstruction process allows for real-time responses, with
state-of-the-art reconstruction performance. We have experimentally tested the
proposed solution on both a synthetic dataset and real objects, and further
demonstrated our method to be applicable for full-color NLOS imaging.
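The two-step remapping described in the abstract admits a compact illustration: a variational autoencoder is first trained as a generative prior over hidden scenes, and a separate compression network then remaps a Lidar measurement onto the VAE latent space so that the frozen decoder can render the scene. The PyTorch sketch below follows this reading of the abstract; the module names (SceneVAE, CompressionNet), the layer sizes, and the 1024-sample measurement vector are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a two-step deep remapping pipeline.
# All architectural details here are hypothetical; only the two-step idea
# (generative VAE prior + measurement-to-latent compression net) follows the abstract.
import torch
import torch.nn as nn

class SceneVAE(nn.Module):
    """Step 1: a variational autoencoder trained on NLOS scene images (64x64, 1 channel assumed).

    Its decoder acts as a generative prior: any latent vector z is mapped to a
    plausible scene, which is what keeps reconstructions high-fidelity.
    """
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.to_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

class CompressionNet(nn.Module):
    """Step 2: remaps a raw Lidar measurement vector onto the VAE latent space.

    At inference the measurement is compressed to a latent code, and the
    frozen VAE decoder turns that code into the hidden scene.
    """
    def __init__(self, measurement_dim: int = 1024, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(measurement_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, m):
        return self.net(m)

# Inference: Lidar measurement -> latent code -> reconstructed NLOS scene.
vae = SceneVAE()
compressor = CompressionNet()
measurement = torch.randn(1, 1024)            # placeholder Lidar signal
scene = vae.decoder(compressor(measurement))  # shape (1, 1, 64, 64)
```

One plausible training schedule mirrors the two steps: fit SceneVAE on scene images with the usual reconstruction-plus-KL loss, then freeze it and fit CompressionNet to regress the encoder's latent codes from paired Lidar measurements.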
Related papers
- A Novel end-to-end Framework for Occluded Pixel Reconstruction with Spatio-temporal Features for Improved Person Re-identification [0.842885453087587]
Person re-identification is vital for monitoring and tracking crowd movement to enhance public security.
In this work, we propose a plausible solution by developing effective occlusion detection and reconstruction framework for RGB images/videos consisting of Deep Neural Networks.
Specifically, a CNN-based occlusion detection model classifies individual input frames, followed by a Conv-LSTM and Autoencoder to reconstruct the occluded pixels corresponding to the occluded frames for sequential (video) and non-sequential (image) data.
arXiv Detail & Related papers (2023-04-16T08:14:29Z)
- Exploring Resolution and Degradation Clues as Self-supervised Signal for Low Quality Object Detection [77.3530907443279]
We propose a novel self-supervised framework to detect objects in degraded low resolution images.
Our method achieves superior performance compared with existing methods under various degradation conditions.
arXiv Detail & Related papers (2022-08-05T09:36:13Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Physics to the Rescue: Deep Non-line-of-sight Reconstruction for High-speed Imaging [13.271762773872476]
We present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction.
Our method outperforms prior physics and learning based approaches on both synthetic and real measurements.
arXiv Detail & Related papers (2022-05-03T02:47:02Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- SPI-GAN: Towards Single-Pixel Imaging through Generative Adversarial Network [6.722629246312285]
We propose a generative adversarial network-based reconstruction framework for single-pixel imaging, referred to as SPI-GAN.
Our method can reconstruct images with 17.92 dB PSNR and 0.487 SSIM, even if the sampling ratio drops to 5% (see the PSNR/SSIM sketch after this list).
arXiv Detail & Related papers (2021-07-03T03:06:09Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors and partial coherence frequently threatens the viability of the experiment.
A modern Deep Learning framework is used to autonomously correct these setup incoherences, thus improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- LSHR-Net: a hardware-friendly solution for high-resolution computational imaging using a mixed-weights neural network [5.475867050068397]
We propose a novel hardware-friendly solution based on mixed-weights neural networks for computational imaging.
In particular, learned binary-weight sensing patterns are tailored to the sampling device.
Our method has been validated on benchmark datasets and achieves state-of-the-art reconstruction accuracy.
arXiv Detail & Related papers (2020-04-27T20:59:51Z)
- u-net CNN based fourier ptychography [5.46367622374939]
We propose a new retrieval algorithm that is based on convolutional neural networks.
Experiments demonstrate that our model achieves better reconstruction results and is more robust under system aberrations.
arXiv Detail & Related papers (2020-03-16T22:48:44Z)
- Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced is continuous, realistic-looking, and can be generated at least two times faster than the real speed of acquisition.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)
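For reference, the PSNR and SSIM figures quoted for SPI-GAN above are standard image-quality metrics rather than anything specific to that paper. Below is a minimal sketch of how PSNR is computed, assuming images normalized to [0, 1]; the function name and toy images are illustrative.

```python
# PSNR = 10 * log10(MAX^2 / MSE); with images in [0, 1], MAX = 1.
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a reconstruction."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(1.0 / mse)  # assumes mse > 0

reference = np.random.rand(64, 64)                            # toy ground-truth image
reconstruction = reference + 0.05 * np.random.randn(64, 64)   # toy noisy reconstruction
print(f"PSNR: {psnr(reference, reconstruction):.2f} dB")      # higher is better
```

SSIM compares local luminance, contrast, and structure over sliding windows and reaches 1 only for identical images; in practice it is usually computed with a library routine such as skimage.metrics.structural_similarity.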
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.