Scattering-induced entropy boost for highly-compressed optical sensing and encryption
- URL: http://arxiv.org/abs/2301.06084v2
- Date: Fri, 6 Sep 2024 09:22:17 GMT
- Title: Scattering-induced entropy boost for highly-compressed optical sensing and encryption
- Authors: Xinrui Zhan, Xuyang Chang, Daoyu Li, Rong Yan, Yinuo Zhang, Liheng Bian
- Abstract summary: Image sensing often relies on a high-quality machine vision system with a large field of view and high resolution.
We propose a novel image-free sensing framework for resource-efficient image classification.
The proposed framework is shown to obtain over 95% accuracy at sampling rates of 1% and 5% for classification on the MNIST dataset and Chinese license plate recognition, respectively.
- Score: 7.502671257653539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image sensing often relies on a high-quality machine vision system with a large field of view and high resolution. Such a system requires fine imaging optics, incurs high computational costs, and demands a large communication bandwidth between image sensors and computing units. In this paper, we propose a novel image-free sensing framework for resource-efficient image classification, where the required number of measurements can be reduced by up to two orders of magnitude. In the proposed framework for single-pixel detection, the optical field for a target is first scattered by an optical diffuser and then two-dimensionally modulated by a spatial light modulator. The optical diffuser simultaneously serves as a compressor and an encryptor for the target information, effectively narrowing the field of view and improving the system's security. The one-dimensional sequence of intensity values, which is measured with time-varying patterns on the spatial light modulator, is then used to extract semantic information based on end-to-end deep learning. The proposed sensing framework is shown to obtain over 95% accuracy at sampling rates of 1% and 5% for classification on the MNIST dataset and the recognition of Chinese license plates, respectively, and the framework is up to 24% more efficient than the approach without an optical diffuser. The proposed framework represents a significant breakthrough in high-throughput machine intelligence for scene analysis with low bandwidth, low costs, and strong encryption.
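The measurement pipeline described in the abstract (scattering, SLM modulation, single-pixel detection) can be sketched numerically. The following is a minimal toy simulation, not the authors' implementation: the diffuser is modeled as a fixed random linear scrambling, the SLM as a sequence of binary patterns, and all shapes and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulation of the single-pixel pipeline: a 28x28 target (MNIST-sized)
# is scattered by a random diffuser, then modulated by a sequence of binary
# SLM patterns; each exposure yields one scalar intensity, so M patterns
# produce an M-element measurement sequence.
H = W = 28
n_pixels = H * W
sampling_rate = 0.01                       # 1% of the pixel count
M = max(1, int(sampling_rate * n_pixels))  # number of single-pixel measurements

target = rng.random((H, W))                # stand-in for the optical field of a digit

# Model the diffuser as a fixed random linear operator (a crude surrogate
# for physical scattering; it both compresses and scrambles the scene).
diffuser = rng.standard_normal((n_pixels, n_pixels)) / np.sqrt(n_pixels)
scattered = diffuser @ target.ravel()

# Time-varying binary SLM patterns; each row is one modulation pattern.
patterns = rng.integers(0, 2, size=(M, n_pixels)).astype(float)

# Bucket detector: one intensity value per pattern.
measurements = patterns @ scattered        # shape (M,)

print(measurements.shape)                  # (7,) at a 1% sampling rate for 28x28
```

In the paper's framework, this 7-element sequence (rather than a reconstructed image) would be fed directly to an end-to-end deep network for classification, which is what makes the approach "image-free".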
Related papers
- Spatial-frequency Dual-Domain Feature Fusion Network for Low-Light Remote Sensing Image Enhancement [49.15531684596958]
We propose a Dual-Domain Feature Fusion Network (DFFN) for low-light remote sensing image enhancement.
The first phase learns amplitude information to restore image brightness, and the second phase learns phase information to refine details.
We have constructed two dark light remote sensing datasets to address the current lack of datasets in dark light remote sensing image enhancement.
arXiv Detail & Related papers (2024-04-26T13:21:31Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because exponential operation introduces high computational complexity, we propose to use Taylor Series to approximate gamma correction.
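The idea of replacing the exponentiation in gamma correction with a Taylor series can be illustrated as follows. This is a generic sketch of the mathematical trick, not the paper's network: writing x**gamma = exp(gamma * ln x) and truncating the Taylor expansion of exp; the function name and truncation order are assumptions.

```python
import numpy as np

# Illustrative sketch: approximate x**gamma = exp(gamma * ln(x)) with a
# truncated Taylor series of exp, avoiding per-pixel exponentiation.
def gamma_taylor(x, gamma, order=8):
    t = gamma * np.log(np.clip(x, 1e-6, 1.0))  # clip keeps log finite for dark pixels
    out = np.zeros_like(x)
    term = np.ones_like(x)                     # k-th Taylor term t**k / k!
    for k in range(order + 1):
        out += term
        term = term * t / (k + 1)              # recurrence to the next term
    return out

x = np.linspace(0.05, 1.0, 5)                  # normalized intensities
approx = gamma_taylor(x, gamma=0.45)
exact = x ** 0.45
print(np.max(np.abs(approx - exact)))          # small for moderate orders
```

The truncated sum involves only multiplications and additions, which is the computational advantage the summary alludes to.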
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression [41.24757573290883]
We design a compact neural network representation for the light field compression task.
It is composed of two types of complementary kernels: descriptive kernels (descriptors) that store scene description information learned during training, and modulatory kernels (modulators) that control the rendering of different SAIs from the queried perspectives.
arXiv Detail & Related papers (2023-07-12T12:58:03Z)
- Time-lapse image classification using a diffractive neural network [0.0]
We show for the first time a time-lapse image classification scheme using a diffractive network.
We show a blind testing accuracy of 62.03% on the optical classification of objects from the CIFAR-10 dataset.
This constitutes the highest inference accuracy achieved so far using a single diffractive network.
arXiv Detail & Related papers (2022-08-23T08:16:30Z)
- All-optical image classification through unknown random diffusers using a single-pixel diffractive network [13.7472825798265]
Classification of an object behind a random and unknown scattering medium is a challenging task for the computational imaging and machine vision fields.
Recent deep learning-based approaches demonstrated the classification of objects using diffuser-distorted patterns collected by an image sensor.
Here, we present an all-optical processor to directly classify unknown objects through unknown, random phase diffusers using broadband illumination detected with a single pixel.
arXiv Detail & Related papers (2022-08-08T08:26:08Z)
- A photosensor employing data-driven binning for ultrafast image recognition [0.0]
Pixel binning is a technique widely used in optical image acquisition and spectroscopy.
Here, we push the concept of binning to its limit by combining a large fraction of the sensor elements into a single superpixel.
For a given pattern recognition task, its optimal shape is determined from training data using a machine learning algorithm.
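The superpixel idea in this entry (combining many sensor elements into one read-out whose shape is learned from data) can be sketched in a few lines. This is a hypothetical illustration only: the mask here is random rather than learned, and the shapes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# One sensor frame (values stand in for per-element photoresponses).
frame = rng.random((64, 64))

# Stand-in for a task-optimized binary binning mask; in the paper this
# shape would be determined from training data by a learning algorithm.
mask = rng.random((64, 64)) > 0.5

# Binning to its limit: all selected elements are summed into one superpixel,
# giving a single scalar read-out per frame.
superpixel_value = frame[mask].sum()

print(mask.mean())  # fraction of sensor elements combined into the superpixel
```

Because the read-out is a single value per frame, the recognition decision can be made at the sensor's full frame rate, which is what enables the "ultrafast" recognition in the title.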
arXiv Detail & Related papers (2021-11-20T15:38:39Z)
- RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z)
- Light Lies: Optical Adversarial Attack [24.831391763610046]
This paper introduces an optical adversarial attack, which physically alters the light field information arriving at the image sensor so that the classification model yields misclassification.
We present experiments based on both simulation and a real hardware optical system, from which the feasibility of the proposed optical attack is demonstrated.
arXiv Detail & Related papers (2021-06-18T04:20:49Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high quality version via incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Correlation Plenoptic Imaging between Arbitrary Planes [52.77024349608834]
We show that the protocol enables to change the focused planes, in post-processing, and to achieve an unprecedented combination of image resolution and depth of field.
Results lead the way towards the development of compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled photon illumination.
arXiv Detail & Related papers (2020-07-23T14:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.