Event-Driven Imaging in Turbid Media: A Confluence of Optoelectronics
and Neuromorphic Computation
- URL: http://arxiv.org/abs/2309.06652v1
- Date: Wed, 13 Sep 2023 00:38:59 GMT
- Title: Event-Driven Imaging in Turbid Media: A Confluence of Optoelectronics
and Neuromorphic Computation
- Authors: Ning Zhang, Timothy Shea, Arto Nurmikko
- Abstract summary: A new optical-computational method is introduced to unveil images of targets whose visibility is severely obscured by light scattering in dense, turbid media.
The scheme is inspired by human vision: diffuse photons collected from the turbid medium are first transformed into spike trains by a dynamic vision sensor, as in the retina.
Image reconstruction is achieved under conditions of turbidity where an original image is unintelligible to the human eye or a digital video camera.
- Score: 9.53078750806038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper a new optical-computational method is introduced to unveil
images of targets whose visibility is severely obscured by light scattering in
dense, turbid media. The targets of interest are taken to be dynamic in that
their optical properties are time-varying whether stationary in space or
moving. The scheme, to our knowledge the first of its kind, is inspired by human
vision: diffuse photons collected from the turbid medium are first transformed
into spike trains by a dynamic vision sensor, as in the retina, and
image reconstruction is then performed by a neuromorphic computing approach
mimicking the brain. We combine benchtop experimental data in both reflection
(backscattering) and transmission geometries with support from physics-based
simulations to develop a neuromorphic computational model and then apply this
for image reconstruction of different MNIST characters and image sets by a
dedicated deep spiking neural network algorithm. Image reconstruction is
achieved under conditions of turbidity where an original image is
unintelligible to the human eye or a digital video camera, yet clearly and
quantifiably identifiable when using the new neuromorphic computational
approach.
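The sensing-and-reconstruction pipeline described in the abstract can be sketched in a minimal form: frames become ON/OFF spike events whenever a pixel's log intensity drifts past a contrast threshold (the dynamic-vision-sensor model), and the resulting spike trains can feed leaky integrate-and-fire units of the kind used in spiking networks. The function names, threshold, and neuron parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2, eps=1e-6):
    """DVS model (illustrative): emit an ON/OFF event at a pixel when its
    log intensity drifts by +/- threshold from the last event's level."""
    ref = np.log(frames[0] + eps)            # per-pixel reference log intensity
    events = []                              # (t, y, x, polarity)
    for t, frame in enumerate(frames[1:], start=1):
        logI = np.log(frame + eps)
        diff = logI - ref
        on, off = diff >= threshold, diff <= -threshold
        for y, x in zip(*np.nonzero(on)):
            events.append((t, int(y), int(x), +1))
        for y, x in zip(*np.nonzero(off)):
            events.append((t, int(y), int(x), -1))
        ref = np.where(on | off, logI, ref)  # reset reference where events fired
    return events

def lif_response(spike_train, weight=0.6, tau=5.0, v_thresh=1.0):
    """Leaky integrate-and-fire unit (illustrative parameters): each input
    spike adds `weight` to the membrane potential, which leaks with time
    constant `tau`; crossing v_thresh emits a spike and resets the potential."""
    decay, v, out = np.exp(-1.0 / tau), 0.0, []
    for s in spike_train:
        v = v * decay + weight * s
        out.append(int(v >= v_thresh))
        if out[-1]:
            v = 0.0                          # hard reset after firing
    return out
```

A doubling of pixel intensity (a log change of about 0.69) exceeds the 0.2 threshold and fires an ON event; halving fires an OFF event.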
Related papers
- Neuromorphic Optical Tracking and Imaging of Randomly Moving Targets through Strongly Scattering Media [8.480104395572418]
We develop an end-to-end neuromorphic optical engineering and computational approach to track and image normally invisible objects.
Photons emerging from dense scattering media are detected by the event camera and converted to pixel-wise asynchronous spike trains.
We demonstrate tracking and imaging randomly moving objects in dense turbid media as well as image reconstruction of spatially stationary but optically dynamic objects.
arXiv Detail & Related papers (2025-01-07T15:38:13Z)
- Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method on retrospective patient data from clinical cases, showing that it outperforms state-of-the-art methods while meeting current clinical standards for registration.
arXiv Detail & Related papers (2024-09-18T13:40:59Z)
- Decoding visual brain representations from electroencephalography through Knowledge Distillation and latent diffusion models [0.12289361708127873]
We present an innovative method that employs knowledge distillation and latent diffusion models to classify and reconstruct images from the ImageNet dataset using electroencephalography (EEG) data.
We analyzed EEG recordings from 6 participants, each exposed to 50 images spanning 40 unique semantic categories.
We incorporated an image reconstruction mechanism based on pre-trained latent diffusion models, which allowed us to generate an estimate of the images which had elicited EEG activity.
arXiv Detail & Related papers (2023-09-08T09:13:50Z)
- Physics-Driven Turbulence Image Restoration with Stochastic Refinement [80.79900297089176]
Image distortion by atmospheric turbulence is a critical problem in long-range optical imaging systems.
Fast and physics-grounded simulation tools have been introduced to help the deep-learning models adapt to real-world turbulence conditions.
This paper proposes the Physics-integrated Restoration Network (PiRN) to help the network disentangle the stochasticity from the degradation and the underlying image.
arXiv Detail & Related papers (2023-07-20T05:49:21Z)
- Natural scene reconstruction from fMRI signals using generative latent diffusion [1.90365714903665]
We present a two-stage scene reconstruction framework called "Brain-Diffuser".
In the first stage, we reconstruct images that capture low-level properties and overall layout using a VDVAE (Very Deep Variational Autoencoder) model.
In the second stage, we use the image-to-image framework of a latent diffusion model conditioned on predicted multimodal (text and visual) features.
arXiv Detail & Related papers (2023-03-09T15:24:26Z)
- Pixelated Reconstruction of Foreground Density and Background Surface Brightness in Gravitational Lensing Systems using Recurrent Inference Machines [116.33694183176617]
We use a neural network based on the Recurrent Inference Machine to reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps.
When compared to more traditional parametric models, the proposed method is significantly more expressive and can reconstruct complex mass distributions.
arXiv Detail & Related papers (2023-01-10T19:00:12Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models of their functional analogue in the brain, the ventral stream of visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- Scale-, shift- and rotation-invariant diffractive optical networks [0.0]
Diffractive Deep Neural Networks (D2NNs) harness light-matter interaction over a series of trainable surfaces to compute a desired statistical inference task.
Here, we demonstrate a new training strategy for diffractive networks that introduces input object translation, rotation and/or scaling during the training phase.
This training strategy successfully guides the evolution of the diffractive optical network design towards a solution that is scale-, shift- and rotation-invariant.
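The training strategy above can be sketched as per-sample input augmentation; this minimal version uses random translations only, with the rotation and scaling transforms following the same per-sample pattern (function names and the shift range are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_shift(img, max_shift=3):
    """Translate an input pattern by a random integer offset (wrap-around),
    so training exposes the network to shifted copies of each object."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(img, (dy, dx), axis=(0, 1))

def augmented_batch(imgs, max_shift=3):
    """Independently shift every image in a batch; random rotations and
    scalings would be applied in the same per-sample fashion."""
    return np.stack([random_shift(im, max_shift) for im in imgs])
```

Because the shifts are wrap-around translations, the augmentation changes each object's position without altering its pixel content.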
arXiv Detail & Related papers (2020-10-24T02:18:39Z)
- Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
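As a minimal illustration of the brightness increments such methods work with (the function name and contrast threshold are assumptions for the sketch, not this paper's algorithm), each event can be treated as a signed step of fixed size in per-pixel log brightness, so summing event polarities approximately recovers a relative brightness-change image:

```python
import numpy as np

def integrate_events(events, shape, threshold=0.2):
    """Sum signed events per pixel: each (t, y, x, polarity) event marks an
    approximate +/- `threshold` step in log brightness at that pixel."""
    log_change = np.zeros(shape)
    for t, y, x, polarity in events:
        log_change[y, x] += polarity * threshold
    return log_change
```

Two ON events at a pixel accumulate to +0.4 in log brightness, one OFF event to -0.2, and untouched pixels stay at zero.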
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.