Coded Illumination for Improved Lensless Imaging
- URL: http://arxiv.org/abs/2111.12862v1
- Date: Thu, 25 Nov 2021 01:22:40 GMT
- Title: Coded Illumination for Improved Lensless Imaging
- Authors: Yucheng Zheng and M. Salman Asif
- Abstract summary: We propose to use coded illumination to improve the quality of images reconstructed with lensless cameras.
In our imaging model, the scene/object is illuminated by multiple coded illumination patterns as the lensless camera records sensor measurements.
We propose a fast and low-complexity recovery algorithm that exploits the separability and block-diagonal structure in our system.
- Score: 22.992552346745523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mask-based lensless cameras can be flat, thin and light-weight, which makes
them suitable for novel designs of computational imaging systems with large
surface areas and arbitrary shapes. Despite recent progress in lensless
cameras, the quality of images recovered from the lensless cameras is often
poor due to the ill-conditioning of the underlying measurement system. In this
paper, we propose to use coded illumination to improve the quality of images
reconstructed with lensless cameras. In our imaging model, the scene/object is
illuminated by multiple coded illumination patterns as the lensless camera
records sensor measurements. We designed and tested a number of illumination
patterns and observed that shifting dots (and related orthogonal) patterns
provide the best overall performance. We propose a fast and low-complexity
recovery algorithm that exploits the separability and block-diagonal structure
in our system. We present simulation results and hardware experiment results to
demonstrate that our proposed method can significantly improve the
reconstruction quality.
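To make the imaging model concrete, below is a minimal NumPy sketch (not the authors' code) that assumes a separable lensless system of the form Y_k = P_L (S_k ⊙ X) P_R^T, where S_k is the k-th coded illumination pattern. It simulates a few shifting-dot patterns and recovers the scene with a plain Tikhonov-regularized least-squares solve; the random matrices, image sizes, and the dense solver are illustrative assumptions, whereas the paper's recovery algorithm exploits the separable, block-diagonal structure for speed.

```python
import numpy as np

# Minimal sketch (not the authors' code), assuming a separable lensless model
#   Y_k = P_L (S_k * X) P_R^T + noise,
# where P_L, P_R model the mask/sensor and S_k is the k-th illumination pattern.
rng = np.random.default_rng(0)
n, m, K = 32, 48, 4                               # scene size, sensor size, number of patterns
P_L = rng.standard_normal((m, n)) / np.sqrt(n)    # left system matrix (assumed known/calibrated)
P_R = rng.standard_normal((m, n)) / np.sqrt(n)    # right system matrix (assumed known/calibrated)
X = rng.random((n, n))                            # ground-truth scene (stand-in image)

def shifting_dots(k, n, spacing=2):
    """Sparse dot grid shifted per capture (a stand-in for shifting-dot patterns)."""
    S = np.zeros((n, n))
    S[k % spacing::spacing, (k // spacing) % spacing::spacing] = 1.0
    return S

patterns = [shifting_dots(k, n) for k in range(K)]

# Simulate the K coded-illumination captures.
measurements = [P_L @ (S * X) @ P_R.T + 0.01 * rng.standard_normal((m, m)) for S in patterns]

# Baseline recovery: stack the vectorized measurements and solve a Tikhonov-regularized
# least-squares problem. (The paper's algorithm avoids this dense solve by exploiting the
# separable, block-diagonal structure; this is only meant to illustrate the model.)
kron = np.kron(P_R, P_L)                          # vec(P_L Z P_R^T) = kron(P_R, P_L) vec(Z)
A = np.vstack([kron * S.flatten(order="F") for S in patterns])
y = np.concatenate([Y.flatten(order="F") for Y in measurements])
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n * n), A.T @ y)
X_hat = x_hat.reshape((n, n), order="F")
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

In this toy setup the four shifted dot grids tile the scene, so every pixel is illuminated exactly once across the captures.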
Related papers
- Optical Aberration Correction in Postprocessing using Imaging Simulation [17.331939025195478]
The popularity of mobile photography continues to grow.
Recent cameras have shifted some aberration-correction tasks from optical design to postprocessing systems.
We propose a practical method for recovering the degradation caused by optical aberrations.
arXiv Detail & Related papers (2023-05-10T03:20:39Z) - Neural Lens Modeling [50.57409162437732]
NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
arXiv Detail & Related papers (2023-04-10T20:09:17Z) - High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z) - Learning rich optical embeddings for privacy-preserving lensless image classification [17.169529483306103]
We exploit the unique multiplexing property of lensless imaging, casting the optics as an encoder that produces learned embeddings directly at the camera sensor.
We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion.
Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower-resolution embeddings at the sensor, and hence better privacy, since it is much harder to recover meaningful images from these measurements.
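As a rough illustration of that joint optimization (not the paper's actual model), the PyTorch sketch below treats the lensless optic as a single learnable PSF applied by convolution, takes a heavily downsampled sensor reading as the embedding, and trains the optical parameters together with a small classifier; the PSF parameterization, sizes, and training loop are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch (not the paper's model): the lensless optic is approximated by a
# single learnable PSF applied as a convolution, the "embedding" is the heavily
# downsampled sensor reading, and a small MLP classifies it.
class LenslessEncoderClassifier(nn.Module):
    def __init__(self, embed_size=8, num_classes=10):
        super().__init__()
        self.psf = nn.Parameter(torch.rand(1, 1, 15, 15))   # learnable optical element (stand-in PSF)
        self.embed_size = embed_size
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(embed_size * embed_size, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):                                    # x: (B, 1, H, W) grayscale scene
        psf = torch.relu(self.psf)                           # keep the PSF non-negative (intensity)
        sensor = F.conv2d(x, psf, padding=7)                 # optical multiplexing on the sensor
        embedding = F.adaptive_avg_pool2d(sensor, self.embed_size)  # low-resolution sensor embedding
        return self.classifier(embedding)

# Joint end-to-end optimization of the optical parameters and the digital classifier.
model = LenslessEncoderClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(16, 1, 64, 64)                           # placeholder batch
labels = torch.randint(0, 10, (16,))
loss = F.cross_entropy(model(images), labels)
loss.backward()
opt.step()
```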
arXiv Detail & Related papers (2022-06-03T07:38:09Z) - Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z) - Unrolled Primal-Dual Networks for Lensless Cameras [0.45880283710344055]
We show that learning a supervised primal-dual reconstruction method results in image quality matching the state of the art in the literature.
This improvement stems from our finding that embedding learnable forward and adjoint models in a learned primal-dual optimization framework can further improve the quality of reconstructed images.
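For intuition, here is a generic unrolled primal-dual sketch in PyTorch (not the paper's exact architecture): each unrolled iteration applies a learnable separable forward/adjoint pair plus small convolutional networks acting as learned dual and primal updates. The separable model Y = P_L X P_R^T, the number of iterations, and the network sizes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Generic unrolled primal-dual sketch (not the paper's exact architecture), with
# learnable forward/adjoint matrices embedded in each unrolled iteration.
class PrimalDualIteration(nn.Module):
    def __init__(self, m, n):
        super().__init__()
        self.P_L = nn.Parameter(0.1 * torch.randn(m, n))   # learnable forward model (left)
        self.P_R = nn.Parameter(0.1 * torch.randn(m, n))   # learnable forward model (right)
        self.dual_net = nn.Conv2d(2, 1, 3, padding=1)      # learned dual update
        self.primal_net = nn.Conv2d(2, 1, 3, padding=1)    # learned primal (proximal) update

    def forward(self, x, y, meas):
        # Dual step: compare the current forward projection with the measurement.
        Ax = self.P_L @ x.squeeze(1) @ self.P_R.T
        y = y + self.dual_net(torch.stack([y.squeeze(1), Ax - meas], dim=1))
        # Primal step: backproject the dual variable with the learnable adjoint.
        ATy = self.P_L.T @ y.squeeze(1) @ self.P_R
        x = x + self.primal_net(torch.stack([x.squeeze(1), ATy], dim=1))
        return x, y

class UnrolledPrimalDual(nn.Module):
    def __init__(self, m=48, n=32, iters=5):
        super().__init__()
        self.iters = nn.ModuleList([PrimalDualIteration(m, n) for _ in range(iters)])
        self.m, self.n = m, n

    def forward(self, meas):                               # meas: (B, m, m) sensor images
        x = torch.zeros(meas.shape[0], 1, self.n, self.n)  # primal variable (scene estimate)
        y = torch.zeros(meas.shape[0], 1, self.m, self.m)  # dual variable (measurement space)
        for it in self.iters:
            x, y = it(x, y, meas)
        return x

recon = UnrolledPrimalDual()(torch.rand(2, 48, 48))        # toy usage: batch of 2 measurements
print(recon.shape)                                          # -> torch.Size([2, 1, 32, 32])
```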
arXiv Detail & Related papers (2022-03-08T19:21:39Z) - A Simple Framework for 3D Lensless Imaging with Programmable Masks [37.35255907261072]
We propose a lensless imaging system that captures a small number of measurements using different patterns on a programmable mask.
First, we present a fast recovery algorithm to recover textures on a fixed number of depth planes in the scene.
Second, we consider the mask design problem for programmable lensless cameras and provide a design template for optimizing the mask patterns.
Third, we use a refinement network as a post-processing step to identify and remove artifacts in the reconstruction.
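As a toy illustration of this capture model (not the authors' implementation), the NumPy sketch below assumes each mask pattern gives a separable system per depth plane, simulates a few captures as the sum of depth-plane contributions, and recovers all plane textures with a dense joint least-squares solve; the actual transfer matrices, mask designs, and refinement network in the paper are far more refined than these random stand-ins.

```python
import numpy as np

# Hedged sketch of multi-shot lensless capture with a programmable mask: each mask
# pattern j gives a separable system (PL_j, PR_j) per depth plane, and a scene split
# over D depth planes contributes additively to every capture.
rng = np.random.default_rng(1)
n, m, D, J = 16, 24, 2, 3            # plane size, sensor size, depth planes, mask patterns

# One separable system per (mask pattern, depth plane) pair (random stand-ins).
PL = rng.standard_normal((J, D, m, n)) / np.sqrt(n)
PR = rng.standard_normal((J, D, m, n)) / np.sqrt(n)
planes = rng.random((D, n, n))       # per-depth textures to recover

# Simulate the J captures: each is the sum of all depth-plane contributions plus noise.
Y = np.stack([
    sum(PL[j, d] @ planes[d] @ PR[j, d].T for d in range(D)) + 0.01 * rng.standard_normal((m, m))
    for j in range(J)
])

# Joint least-squares recovery of all depth planes (dense baseline; the paper's
# recovery algorithm is much faster than this).
A = np.block([[np.kron(PR[j, d], PL[j, d]) for d in range(D)] for j in range(J)])
y = np.concatenate([Y[j].flatten(order="F") for j in range(J)])
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
planes_hat = x_hat.reshape(D, n, n).transpose(0, 2, 1)   # undo column-major vec per plane
print("relative error:", np.linalg.norm(planes_hat - planes) / np.linalg.norm(planes))
```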
arXiv Detail & Related papers (2021-08-18T04:05:33Z) - How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrated image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
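For context, a classical PSF-aware baseline is non-blind Wiener deconvolution, sketched below; this is explicitly not the paper's deep network, only the simplest way to use a known PSF to undo aberration blur, with a Gaussian kernel standing in for a real aberration PSF.

```python
import numpy as np

# Classical stand-in (not the paper's deep network): non-blind Wiener deconvolution
# given the aberrated image and its PSF.
def wiener_deconvolve(blurred, psf, snr=1e2):
    H = np.fft.fft2(psf, s=blurred.shape)              # PSF transfer function, zero-padded
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)      # Wiener filter with a flat SNR prior
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Toy usage with a Gaussian PSF standing in for an aberration kernel.
xx, yy = np.meshgrid(np.arange(9) - 4, np.arange(9) - 4)
psf = np.exp(-(xx**2 + yy**2) / 4.0)
psf /= psf.sum()
image = np.random.default_rng(2).random((64, 64))
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
restored = wiener_deconvolve(blurred, psf)
print("error after deconvolution:", np.linalg.norm(restored - image) / np.linalg.norm(image))
```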
arXiv Detail & Related papers (2021-04-07T12:00:38Z) - FlatNet: Towards Photorealistic Scene Reconstruction from Lensless Measurements [31.353395064815892]
We propose a non-iterative deep learning based reconstruction approach that results in orders of magnitude improvement in image quality for lensless reconstructions.
Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras.
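A hedged sketch of such a non-iterative, two-stage pipeline is shown below: a trainable linear inversion of an assumed separable measurement model followed by a small convolutional refinement network. The actual FlatNet architecture, sizes, and training losses may differ; this only illustrates the "learned inversion, then enhancement" idea.

```python
import torch
import torch.nn as nn

# Hedged two-stage sketch in the spirit of the summary (not the actual FlatNet code):
# a learned linear inversion of an assumed separable model, then CNN refinement.
class TwoStageLenslessNet(nn.Module):
    def __init__(self, m=48, n=32):
        super().__init__()
        # Stage 1: trainable inversion of an (assumed) separable measurement model.
        self.W_L = nn.Parameter(0.1 * torch.randn(n, m))
        self.W_R = nn.Parameter(0.1 * torch.randn(n, m))
        # Stage 2: small convolutional refinement to clean up the linear estimate.
        self.refine = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, meas):                     # meas: (B, m, m) raw sensor images
        coarse = self.W_L @ meas @ self.W_R.T    # learned linear inversion -> (B, n, n)
        return self.refine(coarse.unsqueeze(1))  # refined scene estimate -> (B, 1, n, n)

out = TwoStageLenslessNet()(torch.rand(4, 48, 48))
print(out.shape)                                 # -> torch.Size([4, 1, 32, 32])
```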
arXiv Detail & Related papers (2020-10-29T09:20:22Z) - Redesigning SLAM for Arbitrary Multi-Camera Systems [51.81798192085111]
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.