Compute-first optical detection for noise-resilient visual perception
- URL: http://arxiv.org/abs/2403.09612v1
- Date: Thu, 14 Mar 2024 17:51:38 GMT
- Title: Compute-first optical detection for noise-resilient visual perception
- Authors: Jungmin Kim, Nanfang Yu, Zongfu Yu
- Abstract summary: We propose a concept of optical signal processing before detection to address the degradation of data quality caused by noisy detection.
We demonstrate that spatially redistributing optical signals through a properly designed linear transformer can enhance the detection noise resilience of visual perception tasks.
This compute-first detection scheme can pave the way for advancing infrared machine vision technologies widely used for industrial and defense applications.
- Score: 0.5325390073522079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the context of visual perception, the optical signal from a scene is transferred into the electronic domain by detectors in the form of image data, which are then processed for the extraction of visual information. In noisy and weak-signal environments such as thermal imaging for night vision applications, however, the performance of neural computing tasks faces a significant bottleneck due to the inherent degradation of data quality upon noisy detection. Here, we propose a concept of optical signal processing before detection to address this issue. We demonstrate that spatially redistributing optical signals through a properly designed linear transformer can enhance the detection noise resilience of visual perception tasks, as benchmarked with the MNIST classification. Our idea is supported by a quantitative analysis detailing the relationship between signal concentration and noise robustness, as well as its practical implementation in an incoherent imaging system. This compute-first detection scheme can pave the way for advancing infrared machine vision technologies widely used for industrial and defense applications.
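The signal-concentration idea in the abstract can be illustrated with a minimal numpy sketch. The transform below is a crude stand-in (all power routed to one pixel), not the paper's designed linear transformer or its optical implementation; it only shows why concentrating a fixed optical power onto fewer detector pixels raises the per-pixel signal-to-noise ratio seen after noisy detection.

```python
import numpy as np

scene = np.full(16, 0.25)      # weak signal spread over 16 pixels (total power 4.0)
sigma = 0.5                    # per-pixel detection-noise standard deviation

# Hypothetical concentrating transform: route all optical power to one
# pixel before detection (a toy stand-in for the designed linear optic).
T = np.zeros((16, 16))
T[0, :] = 1.0

snr_direct = scene.max() / sigma              # detect the raw scene
snr_concentrated = (T @ scene).max() / sigma  # detect after concentration

print(f"per-pixel SNR, direct detection: {snr_direct:.2f}")
print(f"per-pixel SNR, compute-first:    {snr_concentrated:.2f}")
```

Since the per-pixel detection noise is fixed, concentrating the same total power into a single pixel multiplies the peak SNR by the number of merged pixels.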
Related papers
- EvMic: Event-based Non-contact sound recovery from effective spatial-temporal modeling [69.96729022219117]
When sound waves hit an object, they induce vibrations that produce high-frequency and subtle visual changes.
Recent advances in event camera hardware show good potential for its application in visual sound recovery.
We propose a novel pipeline for non-contact sound recovery, fully utilizing spatial-temporal information from the event stream.
arXiv Detail & Related papers (2025-04-03T08:51:17Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution, comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF)
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- Quantifying Noise of Dynamic Vision Sensor [49.665407116447454]
Dynamic vision sensors (DVS) are characterised by a large amount of background activity (BA) noise.
It is difficult to distinguish between noise and clean sensor signals using standard image processing techniques.
A new technique derived from Detrended Fluctuation Analysis (DFA) is presented to characterise BA noise.
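As a rough illustration of what a DFA-based characterisation measures (a generic textbook DFA, not the paper's specific method): the scaling exponent of a signal's detrended fluctuations is near 0.5 for uncorrelated noise and moves away from 0.5 when temporal correlations are present.

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """Detrended Fluctuation Analysis scaling exponent.

    ~0.5 indicates uncorrelated (white) noise; values away from 0.5
    indicate temporal correlations in the event stream.
    """
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    flucts = []
    for n in scales:
        f2 = []
        for i in range(len(y) // n):               # non-overlapping windows
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # slope of log F(n) vs log n is the scaling exponent
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(1)
white = rng.normal(size=4096)
print(f"DFA exponent of white noise: {dfa_exponent(white):.2f}")
```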
arXiv Detail & Related papers (2024-04-02T13:43:08Z)
- Toward deep-learning-assisted spectrally-resolved imaging of magnetic noise [52.77024349608834]
We implement a deep neural network to efficiently reconstruct the spectral density of the underlying fluctuating magnetic field.
These results create opportunities for the application of machine-learning methods to color-center-based nanoscale sensing and imaging.
arXiv Detail & Related papers (2022-08-01T19:18:26Z)
- Neuromorphic Camera Denoising using Graph Neural Network-driven Transformers [3.805262583092311]
Neuromorphic vision is a bio-inspired technology that has triggered a paradigm shift in the computer-vision community.
Neuromorphic cameras suffer from significant amounts of measurement noise.
This noise deteriorates the performance of neuromorphic event-based perception and navigation algorithms.
arXiv Detail & Related papers (2021-12-17T18:57:36Z)
- A photosensor employing data-driven binning for ultrafast image recognition [0.0]
Pixel binning is a technique widely used in optical image acquisition and spectroscopy.
Here, we push the concept of binning to its limit by combining a large fraction of the sensor elements into a single superpixel.
For a given pattern recognition task, its optimal shape is determined from training data using a machine learning algorithm.
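The superpixel idea can be sketched in numpy on synthetic data. The two-class data, the sign-of-mean-difference binning rule, and the threshold classifier below are illustrative assumptions, not the paper's trained photosensor:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "training data": two patterns (classes) on a 64-element sensor.
n, d = 200, 64
labels = rng.integers(0, 2, n)
base = np.zeros((2, d))
base[0, :32] = 1.0                 # class 0 lights up the left half
base[1, 32:] = 1.0                 # class 1 lights up the right half
X = base[labels] + rng.normal(0, 0.3, (n, d))

# Data-driven binning: weight each sensor element by the sign of the
# class-mean difference, merging everything into ONE superpixel value.
mu0 = X[labels == 0].mean(axis=0)
mu1 = X[labels == 1].mean(axis=0)
mask = np.sign(mu1 - mu0)          # learned superpixel shape (+/-1 weights)

z = X @ mask                       # single readout per frame
pred = (z > 0).astype(int)         # threshold classifier on that readout
acc = (pred == labels).mean()
print(f"accuracy from a single binned readout: {acc:.2f}")
```

The point of the sketch: once the superpixel shape is fit to the task, a single scalar readout can carry enough information to classify, which is what enables ultrafast recognition.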
arXiv Detail & Related papers (2021-11-20T15:38:39Z)
- Convolutional Deep Denoising Autoencoders for Radio Astronomical Images [0.0]
We apply a Machine Learning technique known as Convolutional Denoising Autoencoder to denoise synthetic images of state-of-the-art radio telescopes.
Our autoencoder can effectively denoise complex images identifying and extracting faint objects at the limits of the instrumental sensitivity.
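The denoising-autoencoder objective, minimizing E||f(x + n) - x||^2 over clean/noisy pairs, can be sketched in its simplest linear special case. The paper trains a convolutional network; the closed-form least-squares fit below is only a stand-in for that objective on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)

d, n = 16, 5000
# Correlated "clean" signals (random linear mixtures), then noisy copies.
x = rng.normal(size=(n, d)) @ rng.normal(size=(d, d)) * 0.5
noisy = x + rng.normal(0.0, 1.0, size=(n, d))

# Least-squares solution of  min_W ||noisy @ W - x||^2  -- the same
# reconstruction objective, restricted to a single linear layer.
W, *_ = np.linalg.lstsq(noisy, x, rcond=None)

mse_before = np.mean((noisy - x) ** 2)
mse_after = np.mean((noisy @ W - x) ** 2)
print(f"MSE before denoising: {mse_before:.3f}")
print(f"MSE after denoising:  {mse_after:.3f}")
```

A convolutional autoencoder replaces the single matrix with an encoder/decoder stack, which is what lets it recover faint, spatially structured sources.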
arXiv Detail & Related papers (2021-10-16T17:08:30Z)
- Spatial-Phase Shallow Learning: Rethinking Face Forgery Detection in Frequency Domain [88.7339322596758]
We present a novel Spatial-Phase Shallow Learning (SPSL) method, which combines spatial image and phase spectrum to capture the up-sampling artifacts of face forgery.
SPSL can achieve the state-of-the-art performance on cross-datasets evaluation as well as multi-class classification and obtain comparable results on single dataset evaluation.
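A minimal numpy sketch of extracting the phase spectrum that SPSL pairs with the spatial image (the random array stands in for a face crop; this is not the SPSL model itself):

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.random((32, 32))                 # stand-in for a face crop

# Phase spectrum: the angle of the 2-D FFT, with magnitude discarded.
F = np.fft.fft2(img)
phase = np.angle(F)

# Phase-only reconstruction (unit magnitude, original phase) keeps the
# spatial structure that up-sampling operations tend to disturb.
phase_only = np.real(np.fft.ifft2(np.exp(1j * phase)))

# Sanity check: magnitude and phase together recover the image exactly.
recon = np.real(np.fft.ifft2(np.abs(F) * np.exp(1j * phase)))
print(np.allclose(recon, img))             # prints True
```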
arXiv Detail & Related papers (2021-03-02T16:45:08Z)
- The Benefit of Distraction: Denoising Remote Vitals Measurements using Inverse Attention [25.285955440420594]
We present an approach that exploits the idea that statistics of noise may be shared between the regions that contain the signal of interest.
Our technique uses the inverse of an attention mask to generate a noise estimate that is then used to denoise temporal observations.
We show that this approach produces state-of-the-art results, increasing the signal-to-noise ratio by up to 5.8 dB.
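The inverse-attention idea, using the region outside the attention mask as a noise estimate, can be sketched on a synthetic 1-D trace. The signals and the shared-noise model below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(4)

T = 2000
t = np.arange(T)
pulse = np.sin(2 * np.pi * t / 60)         # "vitals"-like periodic signal

# Structured noise shared by the signal region and the background.
shared_noise = np.convolve(rng.normal(size=T), np.ones(25) / 25, mode="same")
signal_region = pulse + shared_noise       # inside the attention mask
background = shared_noise + 0.05 * rng.normal(size=T)  # inverse-mask region

# Inverse attention: treat the background as a noise estimate, subtract it.
denoised = signal_region - background

def snr_db(estimate, reference):
    """SNR of an estimate against the clean reference, in dB."""
    noise = estimate - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

gain = snr_db(denoised, pulse) - snr_db(signal_region, pulse)
print(f"SNR gain from inverse-attention denoising: {gain:.1f} dB")
```

The subtraction helps exactly to the extent that the noise statistics really are shared between the two regions, which is the assumption the paper exploits.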
arXiv Detail & Related papers (2020-10-14T13:51:33Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- Class-Specific Blind Deconvolutional Phase Retrieval Under a Generative Prior [8.712404218757733]
The problem arises in various imaging modalities such as Fourier ptychography, X-ray crystallography, and in visible light communication.
We propose to solve this inverse problem using alternating gradient descent algorithm under two pretrained deep generative networks as priors.
The proposed recovery algorithm strives to find a sharp image and a blur kernel in the range of the respective pre-trained generators that best explain the forward measurement model.
arXiv Detail & Related papers (2020-02-28T07:36:28Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.