Learning rich optical embeddings for privacy-preserving lensless image
classification
- URL: http://arxiv.org/abs/2206.01429v1
- Date: Fri, 3 Jun 2022 07:38:09 GMT
- Title: Learning rich optical embeddings for privacy-preserving lensless image
classification
- Authors: Eric Bezzam, Martin Vetterli, Matthieu Simeoni
- Abstract summary: We exploit the unique multiplexing property of lensless cameras by casting the optics as an encoder that produces learned embeddings directly at the camera sensor.
We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion.
Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower resolution embeddings at the sensor, and hence better privacy as it is much harder to recover meaningful images from these measurements.
- Score: 17.169529483306103
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: By replacing the lens with a thin optical element, lensless imaging enables
new applications and solutions beyond those supported by traditional camera
design and post-processing, e.g. compact and lightweight form factors and
visual privacy. The latter arises from the highly multiplexed measurements of
lensless cameras, which require knowledge of the imaging system to recover a
recognizable image. In this work, we exploit this unique multiplexing property:
casting the optics as an encoder that produces learned embeddings directly at
the camera sensor. We do so in the context of image classification, where we
jointly optimize the encoder's parameters and those of an image classifier in
an end-to-end fashion. Our experiments show that jointly learning the lensless
optical encoder and the digital processing allows for lower resolution
embeddings at the sensor, and hence better privacy as it is much harder to
recover meaningful images from these measurements. Additional experiments show
that such an optimization allows for lensless measurements that are more robust
to typical real-world image transformations. While this work focuses on
classification, the proposed programmable lensless camera and end-to-end
optimization can be applied to other computational imaging tasks.
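The abstract's core idea, treating the optics as a learnable encoder whose multiplexed, low-resolution measurements feed a classifier, can be sketched with a simple convolutional forward model. This is a minimal illustration under assumed choices, not the authors' implementation: the circular-convolution imaging model, the `optical_encoder` function name, the mask size, and the 4x downsampling factor are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_fft(scene, psf):
    # Circular convolution via FFT: models the highly multiplexed
    # lensless measurement, where each scene point spreads across
    # the whole sensor according to the point spread function (PSF).
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, scene.shape)))

def optical_encoder(scene, mask, factor=4):
    # The programmable mask plays the role of the learnable PSF
    # (its values would be the "encoder parameters" optimized jointly
    # with the classifier); averaging over factor x factor blocks
    # models a low-resolution readout at the sensor.
    meas = conv2d_fft(scene, mask)
    h, w = meas.shape
    return meas.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical 32x32 scene and mask; the embedding is what a digital
# classifier would consume, and recovering the scene from it requires
# knowledge of the mask (the basis of the privacy claim).
scene = rng.random((32, 32))
mask = rng.random((32, 32))
emb = optical_encoder(scene, mask)
assert emb.shape == (8, 8)
```

In an end-to-end setup, gradients would flow through both `optical_encoder` and the downstream classifier so that the mask values themselves are trained; the sketch above only shows the differentiable forward pass.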
Related papers
- Thin On-Sensor Nanophotonic Array Cameras [36.981384762023794]
We introduce flat nanophotonic computational cameras as an alternative to commodity cameras.
The optical array is embedded on a metasurface that, at 700nm height, is flat and sits on the sensor cover glass at 2.5mm focal distance from the sensor.
We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior.
arXiv Detail & Related papers (2023-08-05T06:04:07Z) - Neural Lens Modeling [50.57409162437732]
NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
arXiv Detail & Related papers (2023-04-10T20:09:17Z) - The Differentiable Lens: Compound Lens Search over Glass Surfaces and
Materials for Object Detection [42.00621716076439]
Most camera lens systems are designed in isolation, separately from downstream computer vision methods.
We propose an optimization strategy to address the challenges of end-to-end lens design.
Specifically, we introduce quantized glass variables to facilitate the optimization of glass materials in an end-to-end context.
arXiv Detail & Related papers (2022-12-08T18:01:17Z) - Controllable Image Enhancement [66.18525728881711]
We present a semiautomatic image enhancement algorithm that can generate high-quality images with multiple styles by controlling a few parameters.
An encoder-decoder framework encodes the retouching skills into latent codes and decodes them into the parameters of image signal processing functions.
arXiv Detail & Related papers (2022-06-16T23:54:53Z) - Coded Illumination for Improved Lensless Imaging [22.992552346745523]
We propose to use coded illumination to improve the quality of images reconstructed with lensless cameras.
In our imaging model, the scene/object is illuminated by multiple coded illumination patterns as the lensless camera records sensor measurements.
We propose a fast and low-complexity recovery algorithm that exploits the separability and block-diagonal structure in our system.
arXiv Detail & Related papers (2021-11-25T01:22:40Z) - Compound eye inspired flat lensless imaging with spatially-coded
Voronoi-Fresnel phase [32.914536774672925]
We report a lensless camera with a spatially-coded Voronoi-Fresnel phase, partly inspired by the biological apposition compound eye, to achieve superior image quality.
We demonstrate and verify the imaging performance with a prototype Voronoi-Fresnel lensless camera on a 1.6-megapixel image sensor in various illumination conditions.
arXiv Detail & Related papers (2021-09-28T13:13:58Z) - How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior
Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z) - Time-Multiplexed Coded Aperture Imaging: Learned Coded Aperture and
Pixel Exposures for Compressive Imaging Systems [56.154190098338965]
We show that our proposed time multiplexed coded aperture (TMCA) can be optimized end-to-end.
TMCA induces better coded snapshots enabling superior reconstructions in two different applications: compressive light field imaging and hyperspectral imaging.
This codification outperforms the state-of-the-art compressive imaging systems by more than 4dB in those applications.
arXiv Detail & Related papers (2021-04-06T22:42:34Z) - Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image may have practical interests in fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z) - PlenoptiCam v1.0: A light-field imaging framework [8.467466998915018]
Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications.
A key obstacle in composing light-fields from exposures taken by a plenoptic camera is to computationally calibrate, align, and rearrange four-dimensional image data.
Several attempts have been proposed to enhance the overall image quality by tailoring pipelines dedicated to particular plenoptic cameras.
arXiv Detail & Related papers (2020-10-14T09:23:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.