Deep Optical Coding Design in Computational Imaging
- URL: http://arxiv.org/abs/2207.00164v1
- Date: Mon, 27 Jun 2022 04:41:48 GMT
- Title: Deep Optical Coding Design in Computational Imaging
- Authors: Henry Arguello, Jorge Bacca, Hasindu Kariyawasam, Edwin Vargas, Miguel
Marquez, Ramith Hettiarachchi, Hans Garcia, Kithmini Herath, Udith
Haputhanthri, Balpreet Singh Ahluwalia, Peter So, Dushan N. Wadduwage,
Chamira U. S. Edussooriya
- Abstract summary: Computational optical imaging (COI) systems leverage optical coding elements (CE) in their setups to encode a high-dimensional scene in a single or multiple snapshots and decode it by using computational algorithms.
The performance of COI systems depends strongly on the design of their main components: the CE pattern and the computational method used to perform a given task.
Deep neural networks (DNNs) have opened a new horizon in CE data-driven designs that jointly consider the optical encoder and computational decoder.
- Score: 16.615106763985942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational optical imaging (COI) systems leverage optical coding elements
(CE) in their setups to encode a high-dimensional scene in a single or multiple
snapshots and decode it by using computational algorithms. The performance of
COI systems depends strongly on the design of their main components: the CE pattern
and the computational method used to perform a given task. Conventional
approaches rely on random patterns or analytical designs to set the
distribution of the CE. However, the available data and algorithm capabilities
of deep neural networks (DNNs) have opened a new horizon in CE data-driven
designs that jointly consider the optical encoder and computational decoder.
Specifically, by modeling the COI measurements through a fully differentiable
image formation model that considers the physics-based propagation of light and
its interaction with the CEs, the parameters that define the CE and the
computational decoder can be optimized in an end-to-end (E2E) manner. Moreover,
by optimizing only the CEs in the same framework, inference tasks can be performed
purely in the optical domain. This work surveys recent advances in CE data-driven
design and provides guidelines on how to parametrize different optical elements
to include them in the E2E framework. Since the E2E framework can handle
different inference applications by changing the loss function and the DNN, we
present low-level tasks such as spectral image reconstruction and high-level
tasks such as privacy-preserving pose estimation, both enhanced by using optimal
task-based optical architectures. Finally, we illustrate classification and 3D
object recognition applications performed at the speed of light using
all-optical DNNs.
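
The E2E recipe described above can be made concrete with a short sketch. The following toy example is not the authors' implementation; the element-wise binary-mask sensing model, the noise level, and all class names are assumptions. It jointly optimizes a relaxed coded-aperture pattern and a small CNN decoder with a reconstruction loss, mirroring the fully differentiable encoder-decoder pipeline the abstract describes.

```python
# Minimal E2E sketch (PyTorch): jointly learn a coded-aperture pattern and a CNN decoder.
# Illustrative toy only; the sensing model (element-wise mask + Gaussian noise) and all
# names are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodedApertureEncoder(nn.Module):
    """Differentiable optical encoder: a learnable (relaxed) binary mask."""
    def __init__(self, height, width):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(1, 1, height, width))  # CE parameters

    def forward(self, x):
        mask = torch.sigmoid(self.logits)          # relax {0,1} to (0,1) so gradients flow
        y = mask * x                               # physics model: element-wise coding
        return y + 0.01 * torch.randn_like(y)      # assumed sensor noise model

class CNNDecoder(nn.Module):
    """Computational decoder: a small CNN that recovers the scene from measurements."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, y):
        return self.net(y)

encoder, decoder = CodedApertureEncoder(64, 64), CNNDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(200):                            # toy training loop on random scenes
    x = torch.rand(8, 1, 64, 64)                   # stand-in for a real dataset
    loss = F.mse_loss(decoder(encoder(x)), x)      # one task loss drives both CE and decoder
    opt.zero_grad(); loss.backward(); opt.step()
```

Swapping the loss function and the decoder head (e.g., cross-entropy with a classification network) adapts the same loop to high-level tasks, as the abstract notes; enforcing a strictly binary or fabrication-feasible CE would typically require an additional relaxation such as a straight-through estimator.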
Related papers
- Distilling Knowledge for Designing Computational Imaging Systems [15.662108754691864]
The performance of E2E optimization is significantly reduced by the physical constraints imposed on the encoder.
We reinterpret the concept of knowledge distillation for designing a physically constrained CI system by transferring the knowledge of a pretrained, less-constrained CI system.
Our approach achieves significantly improved reconstruction performance and encoder design, outperforming both E2E optimization and traditional non-data-driven encoder designs (a minimal illustrative sketch of the distillation idea appears after this list).
arXiv Detail & Related papers (2025-01-29T03:49:21Z)
- Successive optimization of optics and post-processing with differentiable coherent PSF operator and field information [9.527960631238173]
We introduce a precise optical simulation model in which every operation in the pipeline is differentiable.
To efficiently address various degradations, we design a joint optimization procedure that leverages field information.
arXiv Detail & Related papers (2024-12-19T07:49:40Z)
- Highly Constrained Coded Aperture Imaging Systems Design Via a Knowledge Distillation Approach [15.662108754691864]
This paper proposes a knowledge distillation (KD) framework for the design of highly physically constrained COI systems.
We validate the proposed approach using a binary coded-aperture single-pixel camera for monochromatic and multispectral image reconstruction.
arXiv Detail & Related papers (2024-06-25T23:03:48Z)
- Global Search Optics: Automatically Exploring Optimal Solutions to Compact Computational Imaging Systems [15.976326291076377]
The popularity of mobile vision creates a demand for advanced compact computational imaging systems.
Joint design pipelines come to the forefront, where the two significant components are simultaneously optimized via data-driven learning.
In this work, we present Global Search Optimization (GSO) to design compact computational imaging systems.
arXiv Detail & Related papers (2024-04-30T01:59:25Z)
- Optical Quantum Sensing for Agnostic Environments via Deep Learning [59.088205627308]
We introduce an innovative Deep Learning-based Quantum Sensing scheme.
It enables optical quantum sensors to attain the Heisenberg limit (HL) in agnostic environments.
Our findings offer a new lens through which to accelerate optical quantum sensing tasks.
arXiv Detail & Related papers (2023-11-13T09:46:05Z)
- Neural Lithography: Close the Design-to-Manufacturing Gap in Computational Optics with a 'Real2Sim' Learned Photolithography Simulator [2.033983045970252]
We introduce neural lithography to address the 'design-to-manufacturing' gap in computational optics.
We propose a fully differentiable design framework that integrates a pre-trained photolithography simulator into the model-based optical design loop.
arXiv Detail & Related papers (2023-09-29T15:50:26Z)
- Multitask AET with Orthogonal Tangent Regularity for Dark Object Detection [84.52197307286681]
We propose a novel multitask auto encoding transformation (MAET) model to enhance object detection in a dark environment.
In a self-supervision manner, the MAET learns the intrinsic visual structure by encoding and decoding the realistic illumination-degrading transformation.
We achieve state-of-the-art performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-05-06T16:27:14Z)
- All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed the diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework simultaneously estimates the illumination and reflectance, but disregards the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and improves efficiency.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers excellent phase stability and can rely on the large-scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
arXiv Detail & Related papers (2020-10-01T16:04:22Z)
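
As a companion to the "Distilling Knowledge for Designing Computational Imaging Systems" and "Highly Constrained Coded Aperture Imaging Systems Design" entries above, the following sketch shows one plausible way to distill a pretrained, less-constrained teacher system into a binary-constrained student encoder. It reuses the toy CodedApertureEncoder / CNNDecoder classes from the earlier sketch; the specific loss terms and the straight-through binarization are assumptions, not the cited papers' exact method.

```python
# Sketch of knowledge distillation for a physically constrained encoder (assumed setup,
# reusing the toy CodedApertureEncoder / CNNDecoder classes defined in the sketch above).
# The exact losses in the cited papers may differ; this only illustrates the idea.
import torch
import torch.nn.functional as F

def binarize(mask):
    """Hard 0/1 mask with a straight-through gradient (a common relaxation trick)."""
    hard = (mask > 0.5).float()
    return hard + mask - mask.detach()

def distillation_step(x, teacher_enc, teacher_dec, student_enc, student_dec, opt, beta=0.5):
    with torch.no_grad():                                  # teacher is frozen and pretrained
        y_t = teacher_enc(x)                               # unconstrained measurements
        x_t = teacher_dec(y_t)                             # teacher reconstruction
    mask = binarize(torch.sigmoid(student_enc.logits))     # enforce the binary constraint
    y_s = mask * x                                         # constrained student measurement
    x_s = student_dec(y_s)
    loss = F.mse_loss(x_s, x) \
         + beta * (F.mse_loss(y_s, y_t) + F.mse_loss(x_s, x_t))   # imitate the teacher
    opt.zero_grad(); loss.backward(); opt.step()           # opt holds the student parameters
    return loss.item()
```

Freezing the teacher and matching both its measurements and its reconstruction is only one choice of transfer signal; matching intermediate decoder features is an equally common variant.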