A Simple Framework for 3D Lensless Imaging with Programmable Masks
- URL: http://arxiv.org/abs/2108.07966v1
- Date: Wed, 18 Aug 2021 04:05:33 GMT
- Title: A Simple Framework for 3D Lensless Imaging with Programmable Masks
- Authors: Yucheng Zheng, Yi Hua, Aswin C. Sankaranarayanan, M. Salman Asif
- Abstract summary: We propose a lensless imaging system that captures a small number of measurements using different patterns on a programmable mask.
First, we present a fast recovery algorithm to recover textures on a fixed number of depth planes in the scene.
Second, we consider the mask design problem for programmable lensless cameras and provide a design template for optimizing the mask patterns.
Third, we use a refinement network as a post-processing step to identify and remove artifacts in the reconstruction.
- Score: 37.35255907261072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lensless cameras provide a framework to build thin imaging systems by
replacing the lens in a conventional camera with an amplitude or phase mask
near the sensor. Existing methods for lensless imaging can recover the depth
and intensity of the scene, but they require solving computationally expensive
inverse problems. Furthermore, existing methods struggle to recover dense
scenes with large depth variations. In this paper, we propose a lensless
imaging system that captures a small number of measurements using different
patterns on a programmable mask. In this context, we make three contributions.
First, we present a fast recovery algorithm to recover textures on a fixed
number of depth planes in the scene. Second, we consider the mask design
problem for programmable lensless cameras and provide a design template for
optimizing the mask patterns with the goal of improving depth estimation.
Third, we use a refinement network as a post-processing step to identify and
remove artifacts in the reconstruction. These modifications are evaluated
extensively with experimental results on a lensless camera prototype to
showcase the performance benefits of the optimized masks and recovery
algorithms over the state of the art.
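As a rough illustration of the measurement and recovery model behind these contributions, the sketch below represents the scene as textures on a small, fixed number of depth planes, simulates one sensor image per programmable-mask pattern under a shift-invariant (circular-convolution) PSF assumption, and recovers the plane textures with a per-frequency regularized least-squares solve. The function names, the convolutional forward model, and the Fourier-domain solver are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def forward(textures, psfs):
    """textures: (D, H, W) per-plane scene textures.
       psfs:     (K, D, H, W) PSF of mask pattern k at depth plane d.
       returns:  (K, H, W) one sensor image per mask pattern."""
    T = np.fft.fft2(textures)              # FFT over the last two axes
    H_ = np.fft.fft2(psfs)
    Y = (H_ * T[None]).sum(axis=1)         # superpose contributions of all planes
    return np.real(np.fft.ifft2(Y))

def recover(measurements, psfs, reg=1e-3):
    """Per-frequency regularized least squares: circular convolution
       diagonalizes under the FFT, so each spatial frequency is an
       independent K x D linear system."""
    Y = np.fft.fft2(measurements)          # (K, H, W)
    H_ = np.fft.fft2(psfs)                 # (K, D, H, W)
    D = H_.shape[1]
    A = np.moveaxis(H_, (0, 1), (2, 3))    # (H, W, K, D) per-frequency matrices
    b = np.moveaxis(Y, 0, 2)[..., None]    # (H, W, K, 1)
    Ah = np.conj(np.swapaxes(A, -1, -2))   # (H, W, D, K)
    X = np.linalg.solve(Ah @ A + reg * np.eye(D), Ah @ b)[..., 0]
    return np.real(np.fft.ifft2(np.moveaxis(X, 2, 0)))

# Toy example: K = 4 mask patterns, D = 2 depth planes, 64 x 64 sensor.
rng = np.random.default_rng(0)
psfs = rng.random((4, 2, 64, 64))
scene = rng.random((2, 64, 64))
y = forward(scene, psfs)
scene_hat = recover(y, psfs)
```

Because convolution diagonalizes under the FFT, every spatial frequency reduces to an independent K x D system (K mask patterns, D depth planes), which is what makes this style of recovery fast.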
Related papers
- GANESH: Generalizable NeRF for Lensless Imaging [12.985055542373791]
We introduce GANESH, a novel framework designed to enable simultaneous refinement and novel view synthesis from lensless images.
Unlike existing methods that require scene-specific training, our approach supports on-the-fly inference without retraining on each scene.
To facilitate research in this area, we also present the first multi-view lensless dataset, LenslessScenes.
arXiv Detail & Related papers (2024-11-07T15:47:07Z)
- MM-3DScene: 3D Scene Understanding by Customizing Masked Modeling with Informative-Preserved Reconstruction and Self-Distilled Consistency [120.9499803967496]
We propose a novel informative-preserved reconstruction, which explores local statistics to discover and preserve the representative structured points.
Our method can concentrate on modeling regional geometry with less ambiguity in masked reconstruction.
Combining informative-preserved reconstruction on masked areas with consistency self-distillation from unmasked areas yields a unified framework called MM-3DScene.
arXiv Detail & Related papers (2022-12-20T01:53:40Z)
- Layered Depth Refinement with Mask Guidance [61.10654666344419]
We formulate a novel problem of mask-guided depth refinement that utilizes a generic mask to refine the depth prediction of SIDE models.
Our framework performs layered refinement and inpainting/outpainting, decomposing the depth map into two separate layers signified by the mask and the inverse mask.
We empirically show that our method is robust to different types of masks and initial depth predictions, accurately refining depth values in inner and outer mask boundary regions.
arXiv Detail & Related papers (2022-06-07T06:42:44Z)
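The layer decomposition described in this entry can be sketched in a few lines of NumPy/SciPy, assuming a binary foreground mask and an initial depth prediction; the nearest-valid fill and the erosion margin below are crude placeholders for the paper's learned inpainting/outpainting and refinement steps.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def fill_from_nearest(depth, valid):
    """Replace pixels outside `valid` with the depth of the nearest valid pixel."""
    idx = distance_transform_edt(~valid, return_distances=False, return_indices=True)
    return depth[tuple(idx)]

def layered_refine(depth, mask, margin=3):
    """Split the depth map into mask / inverse-mask layers, extend each layer
       across the image, and recombine along the (presumably sharper) mask."""
    fg_core = binary_erosion(mask, iterations=margin)    # avoid soft-edge pixels
    bg_core = binary_erosion(~mask, iterations=margin)
    fg = fill_from_nearest(depth, fg_core)               # foreground layer
    bg = fill_from_nearest(depth, bg_core)                # background layer
    return np.where(mask, fg, bg)
```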
- Coded Illumination for Improved Lensless Imaging [22.992552346745523]
We propose to use coded illumination to improve the quality of images reconstructed with lensless cameras.
In our imaging model, the scene/object is illuminated by multiple coded illumination patterns as the lensless camera records sensor measurements.
We propose a fast and low-complexity recovery algorithm that exploits the separability and block-diagonal structure in our system.
arXiv Detail & Related papers (2021-11-25T01:22:40Z)
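The kind of structure such a fast recovery can exploit is illustrated below for a separable (FlatCam-style) lensless model Y ≈ Phi_L X Phi_R^T with multiplicative illumination patterns. The closed-form SVD solve and the per-pixel demultiplexing over patterns are illustrative stand-ins, not the paper's block-diagonal formulation.

```python
import numpy as np

def separable_recover(Y, Phi_L, Phi_R, lam=1e-2):
    """Closed-form Tikhonov solve of min ||Y - Phi_L X Phi_R^T||_F^2 + lam ||X||_F^2."""
    UL, sL, VLt = np.linalg.svd(Phi_L, full_matrices=False)
    UR, sR, VRt = np.linalg.svd(Phi_R, full_matrices=False)
    Yt = UL.T @ Y @ UR                   # measurement coefficients in the SVD bases
    S = np.outer(sL, sR)                 # products of left/right singular values
    Z = S * Yt / (S**2 + lam)            # per-coefficient shrinkage
    return VLt.T @ Z @ VRt               # back to the scene domain

def recover_with_illumination(Ys, Ss, Phi_L, Phi_R, lam=1e-2, eps=1e-6):
    """Ys[m] ~= Phi_L (X * Ss[m]) Phi_R^T: recover each illuminated scene,
       then demultiplex the shared scene X by per-pixel least squares."""
    num = np.zeros(Ss[0].shape)
    den = np.full(Ss[0].shape, eps)
    for Y_m, S_m in zip(Ys, Ss):
        num += separable_recover(Y_m, Phi_L, Phi_R, lam) * S_m
        den += S_m.astype(float) ** 2
    return num / den

# Toy example: 4 illumination patterns, 48 x 48 sensor, 64 x 64 scene.
rng = np.random.default_rng(0)
X = rng.random((64, 64))
Phi_L, Phi_R = rng.random((48, 64)), rng.random((48, 64))
Ss = [rng.random((64, 64)) for _ in range(4)]
Ys = [Phi_L @ (X * S_m) @ Phi_R.T for S_m in Ss]
X_hat = recover_with_illumination(Ys, Ss, Phi_L, Phi_R)
```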
- Towards Non-Line-of-Sight Photography [48.491977359971855]
Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from hidden objects.
Active NLOS imaging systems rely on the capture of the time of flight of light through the scene.
We propose a new problem formulation, called NLOS photography, to specifically address this deficiency.
arXiv Detail & Related papers (2021-09-16T08:07:13Z)
- CodedStereo: Learned Phase Masks for Large Depth-of-field Stereo [24.193656749401075]
Conventional stereo suffers from a fundamental trade-off between imaging volume and signal-to-noise ratio.
We propose a novel end-to-end learning-based technique to overcome this limitation.
In simulation, we show a 6x increase in the volume that can be imaged.
arXiv Detail & Related papers (2021-04-09T23:44:52Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrated image and PSF map as input and produces the latent high-quality image by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- FlatNet: Towards Photorealistic Scene Reconstruction from Lensless Measurements [31.353395064815892]
We propose a non-iterative, deep-learning-based reconstruction approach that results in orders-of-magnitude improvement in image quality for lensless reconstructions.
Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras.
arXiv Detail & Related papers (2020-10-29T09:20:22Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.