LenslessPiCam: A Hardware and Software Platform for Lensless
Computational Imaging with a Raspberry Pi
- URL: http://arxiv.org/abs/2206.01430v1
- Date: Fri, 3 Jun 2022 07:39:21 GMT
- Authors: Eric Bezzam, Sepand Kashani, Martin Vetterli, Matthieu Simeoni
- Abstract summary: LenslessPiCam provides a framework to enable researchers, hobbyists, and students to implement and explore lensless imaging.
We provide detailed guides and exercises so that LenslessPiCam can be used as an educational resource, and point to results from our graduate-level signal processing course.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lensless imaging seeks to replace/remove the lens in a conventional imaging
system. The earliest cameras were in fact lensless, relying on long exposure
times to form images on the other end of a small aperture in a darkened
room/container (camera obscura). The introduction of a lens allowed for more
light throughput and therefore shorter exposure times, while retaining sharp
focus. The incorporation of digital sensors readily enabled the use of
computational imaging techniques to post-process and enhance raw images (e.g.
via deblurring, inpainting, denoising, sharpening). Recently, imaging
scientists have started leveraging computational imaging as an integral part of
lensless imaging systems, allowing them to form viewable images from the highly
multiplexed raw measurements of lensless cameras (see [5] and references
therein for a comprehensive treatment of lensless imaging). This represents a
real paradigm shift in camera system design, as there is more flexibility to
tailor the hardware to the application at hand (e.g. lightweight or flat
designs). This increased flexibility comes, however, at the price of more
demanding post-processing of the raw digital recordings and a tighter
integration of sensing and computation, which is often difficult to achieve in
practice due to inefficient interactions between the various communities of
scientists involved. With LenslessPiCam, we provide an easily accessible hardware and
software framework to enable researchers, hobbyists, and students to implement
and explore practical and computational aspects of lensless imaging. We also
provide detailed guides and exercises so that LenslessPiCam can be used as an
educational resource, and point to results from our graduate-level signal
processing course.
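The core computational step the abstract describes, forming a viewable image from a highly multiplexed measurement, can be sketched in a few lines. The snippet below is a minimal illustration using Wiener deconvolution with a known point spread function (PSF); it is not LenslessPiCam's actual reconstruction code (which offers more sophisticated solvers such as ADMM), and the simulated scene and PSF are made up for the demonstration.

```python
import numpy as np

def wiener_deconvolve(measurement, psf, snr=1e4):
    """Recover a scene from a lensless measurement via Wiener filtering.

    measurement, psf: 2D arrays of the same shape.
    snr: assumed signal-to-noise ratio; regularizes the inverse filter.
    """
    H = np.fft.rfft2(np.fft.ifftshift(psf))        # PSF transfer function
    Y = np.fft.rfft2(measurement)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.fft.irfft2(W * Y, s=measurement.shape)

# Noiseless simulation: a bright square "scene" multiplexed by a random
# PSF, mimicking how a diffuser spreads each scene point over the sensor.
rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20:40, 20:40] = 1.0
psf = rng.random((64, 64))
psf /= psf.sum()
measurement = np.fft.irfft2(
    np.fft.rfft2(scene) * np.fft.rfft2(np.fft.ifftshift(psf)), s=scene.shape
)
recovered = wiener_deconvolve(measurement, psf, snr=1e6)  # noiseless, so high SNR
```

The raw measurement looks nothing like the scene (every scene point is spread across the whole sensor), yet the scene is recoverable because the PSF is known; in practice the SNR term must be tuned to the sensor's actual noise level.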
Related papers
- Let There Be Light: Robust Lensless Imaging Under External Illumination With Deep Learning
Lensless cameras relax the design constraints of traditional cameras by shifting image formation from analog optics to digital post-processing.
While new camera designs and applications can be enabled, lensless imaging is very sensitive to unwanted interference (other sources, noise, etc.).
arXiv Detail & Related papers (2024-09-25T09:24:53Z)
- Hand Gestures Recognition in Videos Taken with Lensless Camera
This work proposes a deep learning model named Raw3dNet that recognizes hand gestures directly on raw videos captured by a lensless camera.
In addition to conserving computational resources, the reconstruction-free method provides privacy protection.
arXiv Detail & Related papers (2022-10-15T08:52:49Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Learning rich optical embeddings for privacy-preserving lensless image classification
We exploit the unique multiplexing property of casting the optics as an encoder that produces learned embeddings directly at the camera sensor.
We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion.
Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower resolution embeddings at the sensor, and hence better privacy as it is much harder to recover meaningful images from these measurements.
arXiv Detail & Related papers (2022-06-03T07:38:09Z)
- Learning Spatially Varying Pixel Exposures for Motion Deblurring
We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring.
Our work illustrates the promising role that focal-plane sensor-processors can play in the future of computational imaging.
arXiv Detail & Related papers (2022-04-14T23:41:49Z)
- Ray Tracing-Guided Design of Plenoptic Cameras
The design of a plenoptic camera requires the combination of two dissimilar optical systems.
We present a method to calculate the remaining aperture, sensor and microlens array parameters under different sets of constraints.
Our ray tracing-based approach is shown to result in models outperforming their pendants generated with the commonly used paraxial approximations.
arXiv Detail & Related papers (2022-03-09T11:57:00Z)
- Deep Camera Obscura: An Image Restoration Pipeline for Lensless Pinhole Photography
The pinhole camera is perhaps the earliest and simplest form of imaging system, using only a pinhole-sized aperture in place of a lens.
In this paper, we explore an image restoration pipeline using deep learning and domain-knowledge of the pinhole system to enhance the pinhole image quality through a joint denoise and deblur approach.
Our approach allows for more practical exposure times for hand-held photography and provides higher image quality, making it more suitable for daily photography compared to other lensless cameras while keeping size and cost low.
arXiv Detail & Related papers (2021-08-12T07:03:00Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Exploiting Raw Images for Real-Scene Super-Resolution
We study the problem of real-scene single image super-resolution to bridge the gap between synthetic data and real captured images.
We propose a method to generate more realistic training data by mimicking the imaging process of digital cameras.
We also develop a two-branch convolutional neural network to exploit the radiance information originally recorded in raw images.
arXiv Detail & Related papers (2021-02-02T16:10:15Z)
- Dynamic Low-light Imaging with Quanta Image Sensors
We propose a solution using Quanta Image Sensors (QIS) and present a new image reconstruction algorithm.
We show that dynamic scenes can be reconstructed from a burst of frames at a photon level of 1 photon per pixel per frame.
arXiv Detail & Related papers (2020-07-16T20:29:52Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning
Bokeh is an important artistic effect used to highlight the main object of interest in a photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.