Compound eye inspired flat lensless imaging with spatially-coded
Voronoi-Fresnel phase
- URL: http://arxiv.org/abs/2109.13703v1
- Date: Tue, 28 Sep 2021 13:13:58 GMT
- Title: Compound eye inspired flat lensless imaging with spatially-coded
Voronoi-Fresnel phase
- Authors: Qiang Fu, Dong-Ming Yan, and Wolfgang Heidrich
- Abstract summary: We report a lensless camera with spatially-coded Voronoi-Fresnel phase, partly inspired by the biological apposition compound eye, to achieve superior image quality.
We demonstrate and verify the imaging performance with a prototype Voronoi-Fresnel lensless camera on a 1.6-megapixel image sensor in various illumination conditions.
- Score: 32.914536774672925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lensless cameras are a class of imaging devices that shrink the physical
dimensions to the very close vicinity of the image sensor by integrating flat
optics and computational algorithms. Here we report a flat lensless camera with
spatially-coded Voronoi-Fresnel phase, partly inspired by the biological
apposition compound eye, to achieve superior image quality. We propose a design principle
of maximizing the information in optics to facilitate the computational
reconstruction. By introducing a Fourier domain metric, Modulation Transfer
Function volume (MTFv), we devise an optimization framework to guide the
optimal design of the optical element. The resulting Voronoi-Fresnel phase
features an irregular array of quasi-Centroidal Voronoi cells containing a base
first-order Fresnel phase function. We demonstrate and verify the imaging
performance with a prototype Voronoi-Fresnel lensless camera on a 1.6-megapixel
image sensor in various illumination conditions. The proposed design could
benefit the development of compact imaging systems working in extreme physical
conditions.
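The MTF volume metric lends itself to a compact numerical sketch. The following is illustrative only, not the paper's exact definition: it treats MTFv as the normalized volume under the 2-D modulation transfer function surface computed from a PSF, and the function name and normalization are assumptions.

```python
import numpy as np

def mtf_volume(psf):
    """Illustrative MTF volume (MTFv) of a point spread function.

    Sketch only: MTFv is taken here as the mean of the modulation
    transfer function over all spatial frequencies; the paper's
    exact normalization and frequency support may differ.
    """
    psf = psf / psf.sum()        # normalize PSF energy
    otf = np.fft.fft2(psf)      # optical transfer function
    mtf = np.abs(otf)            # modulation transfer function
    return mtf.sum() / mtf.size  # volume under the MTF surface (mean)
```

Under this proxy, an ideal delta-like PSF attains the maximum value of 1, while extended PSFs score lower; maximizing MTFv therefore pushes the optimization toward optics that preserve more information for the computational reconstruction.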
Related papers
- Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z)
- Thin On-Sensor Nanophotonic Array Cameras [36.981384762023794]
We introduce flat nanophotonic computational cameras as an alternative to commodity cameras.
The optical array is embedded on a metasurface that, at 700 nm height, is flat and sits on the sensor cover glass at 2.5 mm focal distance from the sensor.
We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior.
arXiv Detail & Related papers (2023-08-05T06:04:07Z)
- Angle Sensitive Pixels for Lensless Imaging on Spherical Sensors [22.329417756084094]
OrbCam is a lensless architecture for imaging with spherical sensors.
We show that the diversity of pixel orientations on a curved surface is sufficient to improve the conditioning of the mapping between the scene and the sensor.
arXiv Detail & Related papers (2023-06-28T06:28:53Z)
- High-dimensional quantum correlation measurements with an adaptively gated hybrid single-photon camera [58.720142291102135]
We propose an adaptively-gated hybrid intensified camera (HIC) that combines a high spatial resolution sensor and a high temporal resolution detector.
With a spatial resolution of nearly 9 megapixels and nanosecond temporal resolution, this system allows for the realization of previously infeasible quantum optics experiments.
arXiv Detail & Related papers (2023-05-25T16:59:27Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Learning rich optical embeddings for privacy-preserving lensless image classification [17.169529483306103]
We exploit the unique multiplexing property of lensless optics by casting the optics as an encoder that produces learned embeddings directly at the camera sensor.
We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion.
Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower resolution embeddings at the sensor, and hence better privacy as it is much harder to recover meaningful images from these measurements.
arXiv Detail & Related papers (2022-06-03T07:38:09Z)
- Ray Tracing-Guided Design of Plenoptic Cameras [1.1421942894219896]
The design of a plenoptic camera requires the combination of two dissimilar optical systems.
We present a method to calculate the remaining aperture, sensor and microlens array parameters under different sets of constraints.
Our ray tracing-based approach is shown to result in models outperforming their counterparts generated with the commonly used paraxial approximations.
arXiv Detail & Related papers (2022-03-09T11:57:00Z)
- Coded Illumination for Improved Lensless Imaging [22.992552346745523]
We propose to use coded illumination to improve the quality of images reconstructed with lensless cameras.
In our imaging model, the scene/object is illuminated by multiple coded illumination patterns as the lensless camera records sensor measurements.
We propose a fast and low-complexity recovery algorithm that exploits the separability and block-diagonal structure in our system.
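The measurement model summarized above can be illustrated with a toy convolutional lensless camera. Everything below is a hypothetical stand-in (the PSF `psf`, patterns `patterns`, and Tikhonov constant `eps` are assumptions), not the authors' algorithm: working per spatial frequency makes each pattern's system diagonal, which loosely mirrors the block-diagonal structure the recovery exploits.

```python
import numpy as np

def simulate_coded_measurements(scene, psf, patterns):
    """Toy forward model: each illumination pattern modulates the scene,
    and the lensless camera records its circular convolution with the PSF."""
    H = np.fft.fft2(psf)  # convolution is diagonal in the Fourier domain
    return [np.real(np.fft.ifft2(np.fft.fft2(scene * c) * H))
            for c in patterns]

def recover(measurements, psf, patterns, eps=1e-3):
    """Per-frequency (diagonal) regularized least squares, followed by
    demodulation and averaging over patterns. Illustrative only."""
    H = np.fft.fft2(psf)
    estimates = []
    for y, c in zip(measurements, patterns):
        Xc = np.fft.fft2(y) * np.conj(H) / (np.abs(H) ** 2 + eps)
        estimates.append(np.real(np.fft.ifft2(Xc)) / np.maximum(c, eps))
    return np.mean(estimates, axis=0)
```

Because each frequency decouples, the recovery cost scales with an FFT per pattern rather than with a dense linear solve over all pixels.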
arXiv Detail & Related papers (2021-11-25T01:22:40Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
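For context, the classical non-learned baseline for PSF-aware deblurring is Wiener deconvolution with a known, shift-invariant PSF; a minimal sketch follows (the `snr` constant is a hypothetical tuning parameter, not a value from the paper):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=1e2):
    """Classical Wiener filter: a non-learned baseline for PSF-aware
    deblurring, assuming a known shift-invariant PSF and circular
    boundary conditions."""
    H = np.fft.fft2(psf, s=blurred.shape)          # PSF spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

Unlike a learned deep prior, this filter has no lens-specific adaptation; it regularizes all frequencies with a single signal-to-noise assumption.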
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image may be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.