Thin On-Sensor Nanophotonic Array Cameras
- URL: http://arxiv.org/abs/2308.02797v1
- Date: Sat, 5 Aug 2023 06:04:07 GMT
- Authors: Praneeth Chakravarthula, Jipeng Sun, Xiao Li, Chenyang Lei, Gene Chou,
Mario Bijelic, Johannes Froesch, Arka Majumdar, Felix Heide
- Abstract summary: We introduce flat nanophotonic computational cameras as an alternative to commodity cameras.
The optical array is embedded on a metasurface that, at 700 nm height, is flat and sits on the sensor cover glass at 2.5 mm focal distance from the sensor.
We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Today's commodity camera systems rely on compound optics to map light
originating from the scene to positions on the sensor where it gets recorded as
an image. To record images without optical aberrations, i.e., deviations from
Gauss' linear model of optics, typical lens systems introduce increasingly
complex stacks of optical elements which are responsible for the height of
existing commodity cameras. In this work, we investigate \emph{flat
nanophotonic computational cameras} as an alternative that employs an array of
skewed lenslets and a learned reconstruction approach. The optical array is
embedded on a metasurface that, at 700~nm height, is flat and sits on the
sensor cover glass at 2.5~mm focal distance from the sensor. To tackle the
highly chromatic response of a metasurface and design the array over the entire
sensor, we propose a differentiable optimization method that continuously
samples over the visible spectrum and factorizes the optical modulation for
different incident fields into individual lenses. We reconstruct a megapixel
image from our flat imager with a \emph{learned probabilistic reconstruction}
method that employs a generative diffusion model to sample an implicit prior.
To tackle \emph{scene-dependent aberrations in broadband}, we propose a method
for acquiring paired captured training data in varying illumination conditions.
We assess the proposed flat camera design in simulation and with an
experimental prototype, validating that the method is capable of recovering
images from diverse scenes in broadband with a single nanophotonic layer.
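The spectral-sampling idea behind the differentiable optimization can be illustrated with a toy example. This is a minimal sketch, assuming a purely diffractive focal-length dispersion f(λ) ≈ f0·λ0/λ and a single scalar design parameter, whereas the paper optimizes full metasurface phase profiles end to end; all numeric values except the 2.5 mm sensor distance are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 2.5e-3          # sensor distance from the paper (2.5 mm)
lam0 = 550e-9       # assumed design wavelength

def focal(f0, lam):
    # Diffractive lenses focus shorter wavelengths farther: f ~ 1/lambda
    return f0 * lam0 / lam

f0 = 2.0e-3         # initial guess for the design focal length
lr = 0.2
for _ in range(2000):
    lam = rng.uniform(450e-9, 650e-9, size=64)   # sample the visible band
    err = focal(f0, lam) - d                      # per-wavelength defocus
    grad = np.mean(2 * err * (lam0 / lam))        # analytic dL/df0
    f0 -= lr * grad
```

Because the loss averages defocus over continuously sampled wavelengths, the optimizer settles on a design focal length slightly below the sensor distance, balancing defocus across the band rather than focusing a single design wavelength perfectly.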
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction
We propose bit2bit, a new method for reconstructing high-quality image stacks at original resolution from sparse binary quantatemporal image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
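The binary photon model behind quanta imaging admits a simple closed-form per-pixel flux estimate, which a numpy sketch can illustrate; the actual bit2bit method is a learned, self-supervised predictor, and the flux values and frame count here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic photon flux (photons per frame) for a tiny 2x2 "image"
flux = np.array([[0.1, 0.5], [1.0, 2.0]])

# Simulate 1-bit quanta frames: a pixel fires if at least one photon arrives
T = 20000
photons = rng.poisson(flux, size=(T, 2, 2))
frames = (photons > 0).astype(np.uint8)

# Maximum-likelihood flux estimate from the detection rate:
# P(fire) = 1 - exp(-flux)  =>  flux = -ln(1 - mean(frames))
rate = frames.mean(axis=0)
flux_hat = -np.log1p(-rate)
```

This inversion of the Bernoulli detection rate is the statistical backbone that learned reconstruction methods refine with spatio-temporal priors.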
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
- Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering
We build a similarity matrix that incorporates both the spatial diversity of the cameras and the semantic variation of the images.
We apply a diversity-based sampling algorithm to optimize the camera selection.
We also develop a new dataset, IndoorTraj, which includes long and complex camera movements captured by humans in virtual indoor environments.
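Diversity-based selection over a similarity matrix can be sketched as a greedy max-min rule. In this toy version the similarity comes from camera positions only, whereas the paper also folds in semantic image variation; all data here are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

pos = rng.uniform(0, 10, size=(30, 3))           # 30 candidate camera positions
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
sim = np.exp(-dist)                               # high value = redundant pair

def select_diverse(sim, k):
    """Greedy max-min selection: repeatedly add the camera that is
    least similar to everything already chosen."""
    chosen = [0]
    while len(chosen) < k:
        # Worst-case redundancy of each candidate w.r.t. the chosen set
        redundancy = sim[:, chosen].max(axis=1)
        redundancy[chosen] = np.inf               # never re-pick a camera
        chosen.append(int(np.argmin(redundancy)))
    return chosen

subset = select_diverse(sim, k=8)
```

Greedy max-min is a common stand-in for diversity-based sampling; the exact sampling algorithm used in the paper may differ.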
arXiv Detail & Related papers (2024-09-11T08:36:49Z)
- Computational Optics for Mobile Terminals in Mass Production
We construct the perturbed lens system model to illustrate the relationship between the system parameters and the deviated frequency response measured from photographs.
An optimization framework is proposed based on this model to build proxy cameras from the machining samples' SFRs.
Using the proxy cameras, we synthesize data pairs that encode the optical aberrations and random manufacturing biases, for training the aberration-based algorithms.
arXiv Detail & Related papers (2023-05-10T04:17:33Z)
- Learning rich optical embeddings for privacy-preserving lensless image classification
We exploit the multiplexing property of lensless optics by casting them as an encoder that produces learned embeddings directly at the camera sensor.
We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion.
Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower resolution embeddings at the sensor, and hence better privacy as it is much harder to recover meaningful images from these measurements.
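Joint optimization of an optical encoder and a digital classifier can be sketched as a two-matrix logistic regression trained end to end. Everything below is a synthetic assumption: a linear stand-in for the optics, toy 2-class data, and plain gradient descent rather than the paper's training setup:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 2-class data: class-1 "scenes" are brighter by a constant offset
y = rng.integers(0, 2, 200).astype(float)
X = rng.normal(size=(200, 64)) + 2.0 * y[:, None]

W = 0.1 * rng.normal(size=(64, 8))   # "optical" encoder, e.g. mask weights
v = np.zeros(8)                       # digital classifier weights
lr = 0.1

for _ in range(500):
    z = X @ W                         # 8-dim embedding measured at the sensor
    p = 1 / (1 + np.exp(-(z @ v)))    # logistic classifier on the embedding
    g = (p - y) / len(y)              # logistic-loss gradient at the logits
    v -= lr * (z.T @ g)               # update the digital classifier
    W -= lr * np.outer(X.T @ g, v)    # update the optics via the chain rule

acc = ((p > 0.5) == (y > 0.5)).mean()
```

The key point mirrored from the paper is that the gradient flows through the classifier back into the optics, so the low-dimensional sensor embedding is shaped by the downstream task rather than by image fidelity.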
arXiv Detail & Related papers (2022-06-03T07:38:09Z)
- Coded Illumination for Improved Lensless Imaging
We propose to use coded illumination to improve the quality of images reconstructed with lensless cameras.
In our imaging model, the scene/object is illuminated by multiple coded illumination patterns as the lensless camera records sensor measurements.
We propose a fast and low-complexity recovery algorithm that exploits the separability and block-diagonal structure in our system.
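The multi-shot measurement model can be written as one stacked linear system. This is a correctness-only sketch, assuming a known random mixing matrix and continuous-valued illumination codes; the paper's fast algorithm instead exploits the separability and block-diagonal structure that this dense solve ignores:

```python
import numpy as np

rng = np.random.default_rng(3)

n, m, K = 16, 8, 4            # scene pixels, measurements per shot, patterns

x = rng.uniform(0, 1, n)                        # unknown scene (1-D toy)
A = rng.normal(size=(m, n))                     # lensless mixing, assumed known
patterns = rng.uniform(0.2, 1.0, size=(K, n))   # illumination codes

# Each shot records the scene under one illumination pattern
Y = np.stack([A @ (p * x) for p in patterns])           # shape (K, m)

# Stack all shots into one tall system  [A diag(p_k)] x = y_k  and solve
M = np.concatenate([A * p[None, :] for p in patterns])  # shape (K*m, n)
x_hat, *_ = np.linalg.lstsq(M, Y.ravel(), rcond=None)
```

With K coded shots the stacked system has K·m equations for n unknowns, which is why coded illumination can make an otherwise underdetermined lensless recovery well-posed.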
arXiv Detail & Related papers (2021-11-25T01:22:40Z)
- Compound eye inspired flat lensless imaging with spatially-coded Voronoi-Fresnel phase
We report a lensless camera with spatially-coded Voronoi-Fresnel phase, partly inspired by biological apposition compound eye, to achieve superior image quality.
We demonstrate and verify the imaging performance with a prototype Voronoi-Fresnel lensless camera on a 1.6-megapixel image sensor in various illumination conditions.
arXiv Detail & Related papers (2021-09-28T13:13:58Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution
We propose a PSF aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high quality version via incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
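For context, the classical non-learned baseline for PSF-aware restoration is Wiener deconvolution, which inverts a known PSF in the frequency domain with a small regularizer. This self-contained sketch uses synthetic data; the PSF width and regularizer value are assumptions, and the baseline lacks the lens-specific deep priors the paper adds:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random sharp "image" and a Gaussian PSF centred at (0, 0) with wrap-around,
# so circular convolution is a pointwise product in the frequency domain
img = rng.uniform(0, 1, (64, 64))
yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
r2 = np.minimum(yy, 64 - yy) ** 2 + np.minimum(xx, 64 - xx) ** 2
psf = np.exp(-r2 / (2 * 1.5**2))
psf /= psf.sum()

H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Wiener filter: regularised inverse of the PSF
eps = 1e-4
G = np.conj(H) / (np.abs(H) ** 2 + eps)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

The regularizer eps prevents division by near-zero frequency components where the PSF destroys information; learned priors recover detail in exactly those bands, which is where the deep-prior approach improves on this filter.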
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- PlenoptiCam v1.0: A light-field imaging framework
Light-field cameras play a vital role for rich 3-D information retrieval in narrow range depth sensing applications.
A key obstacle in composing light fields from exposures taken by a plenoptic camera is to computationally calibrate, align, and rearrange four-dimensional image data.
Several pipelines have been proposed to enhance the overall image quality, each tailored to a particular plenoptic camera.
arXiv Detail & Related papers (2020-10-14T09:23:18Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves the PSNR of the second-best method by up to 2 dB on average, while reducing execution time by a factor of 48.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
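The core per-pixel solve in photometric stereo is a small least-squares problem. A minimal sketch for a single pixel under the simplest distant-light Lambertian model; the paper extends this to nearby point lights, perspective cameras, and spatially varying isotropic materials:

```python
import numpy as np

# Known distant-light directions (unit vectors); >= 3 make the system solvable
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8],
              [-0.6, 0.0, 0.8]])

# Ground-truth normal and albedo for one surface point
n_true = np.array([0.2, -0.1, 1.0])
n_true /= np.linalg.norm(n_true)
rho = 0.7

I = rho * (L @ n_true)               # Lambertian shading under each light

# Solve L g = I in the least-squares sense; then albedo = |g|, normal = g/|g|
g, *_ = np.linalg.lstsq(L, I, rcond=None)
rho_hat = np.linalg.norm(g)
n_hat = g / rho_hat
```

With more than three lights the system is overdetermined, which is what gives multi-light photometric stereo its robustness to noise and outliers.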
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.