Model-inspired Deep Learning for Light-Field Microscopy with Application
to Neuron Localization
- URL: http://arxiv.org/abs/2103.06164v1
- Date: Wed, 10 Mar 2021 16:24:47 GMT
- Title: Model-inspired Deep Learning for Light-Field Microscopy with Application
to Neuron Localization
- Authors: Pingfan Song, Herman Verinaz Jadan, Carmel L. Howe, Peter Quicke,
Amanda J. Foust, Pier Luigi Dragotti
- Abstract summary: We propose a model-inspired deep learning approach to perform fast and robust 3D localization of sources using light-field microscopy images.
This is achieved by developing a deep network that efficiently solves a convolutional sparse coding problem.
Experiments on localization of mammalian neurons from light-fields show that the proposed approach simultaneously provides enhanced performance, interpretability and efficiency.
- Score: 27.247818386065894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Light-field microscopes are able to capture spatial and angular information
of incident light rays. This allows reconstructing 3D locations of neurons from
a single snapshot. In this work, we propose a model-inspired deep learning
approach to perform fast and robust 3D localization of sources using
light-field microscopy images. This is achieved by developing a deep network
that efficiently solves a convolutional sparse coding (CSC) problem to map
Epipolar Plane Images (EPI) to corresponding sparse codes. The network
architecture is designed systematically by unrolling the convolutional
Iterative Shrinkage and Thresholding Algorithm (ISTA) while the network
parameters are learned from a training dataset. Such principled design enables
the deep network to leverage both domain knowledge implied in the model, as
well as new parameters learned from the data, thereby combining advantages of
model-based and learning-based methods. Practical experiments on localization
of mammalian neurons from light-fields show that the proposed approach
simultaneously provides enhanced performance, interpretability and efficiency.
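To make the unrolling concrete, below is a minimal sketch of a LISTA-style unrolled convolutional ISTA network in the spirit the abstract describes: each layer implements one ISTA update, z <- soft(z - W_e(W_d z - y), theta), where the analysis filters W_e, synthesis filters W_d and thresholds theta are learned from data. The layer count, channel and kernel sizes, per-layer untied weights and learned scalar thresholds are all illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledConvISTA(nn.Module):
    """Unrolled convolutional ISTA; all sizes are hypothetical."""
    def __init__(self, in_ch=1, code_ch=16, kernel=7, n_layers=10):
        super().__init__()
        pad = kernel // 2
        # Per-layer learned analysis (D^T-like) and synthesis (D-like) convolutions.
        self.enc = nn.ModuleList(
            nn.Conv2d(in_ch, code_ch, kernel, padding=pad) for _ in range(n_layers))
        self.dec = nn.ModuleList(
            nn.Conv2d(code_ch, in_ch, kernel, padding=pad) for _ in range(n_layers))
        # One learned soft-threshold per layer (absorbs lambda/L of classical ISTA).
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))

    @staticmethod
    def soft(x, t):
        # Soft-thresholding: the proximal operator of the l1 sparsity penalty.
        return torch.sign(x) * F.relu(x.abs() - t)

    def forward(self, y):
        # y: batch of EPIs, shape (B, 1, H, W); z: sparse code maps.
        z = self.soft(self.enc[0](y), self.theta[0])
        for k in range(1, len(self.enc)):
            r = self.dec[k](z) - y                            # data-fit residual D z - y
            z = self.soft(z - self.enc[k](r), self.theta[k])  # gradient step + shrinkage
        return z

net = UnrolledConvISTA()
codes = net(torch.randn(4, 1, 64, 64))  # (4, 16, 64, 64) sparse code maps

Trained end-to-end on labelled light-field data, such a network keeps the interpretability of the CSC model (every layer is an ISTA step) while learning the filters and thresholds that classical ISTA would have to be handed.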
Related papers
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We introduce a novel class of Deep Sparse Coding (DSC) models.
We derive convergence rates for CNNs in their ability to extract sparse features.
Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn more sparse features.
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
- IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images [50.4538089115248]
Generalizable 3D object reconstruction from single-view RGB-D images remains a challenging task.
We propose a novel approach, IPoD, which harmonizes implicit field learning with point diffusion.
Experiments conducted on the CO3D-v2 dataset affirm the superiority of IPoD, achieving 7.8% improvement in F-score and 28.6% in Chamfer distance over existing methods.
arXiv Detail & Related papers (2024-03-30T07:17:37Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder with hash coding is adopted to help the network capture high-frequency details (see the illustrative sketch after this list).
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Untrained, physics-informed neural networks for structured illumination microscopy [0.456877715768796]
We show that we can combine a deep neural network with the forward model of the structured illumination process to reconstruct sub-diffraction images without training data.
The resulting physics-informed neural network (PINN) can be optimized on a single set of diffraction-limited sub-images.
arXiv Detail & Related papers (2022-07-15T19:02:07Z)
- 3D Convolutional with Attention for Action Recognition [6.238518976312625]
Current action recognition methods use computationally expensive models for learning spatio-temporal dependencies of the action.
This paper proposes a deep neural network architecture for learning such dependencies consisting of a 3D convolutional layer, fully connected layers and attention layer.
The method first learns spatial and temporal features of actions through the 3D-CNN, and then the temporal attention mechanism helps the model focus on essential features.
arXiv Detail & Related papers (2022-06-05T15:12:57Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Localized Persistent Homologies for more Effective Deep Learning [60.78456721890412]
We introduce an approach that relies on a new filtration function to account for location during network training.
We demonstrate experimentally on 2D images of roads and 3D image stacks of neuronal processes that networks trained in this manner are better at recovering the topology of the curvilinear structures they extract.
arXiv Detail & Related papers (2021-10-12T19:28:39Z)
- Compressive spectral image classification using 3D coded convolutional neural network [12.67293744927537]
This paper develops a novel deep learning approach to hyperspectral image classification (HIC) based on measurements from coded-aperture snapshot spectral imagers (CASSI).
A new kind of deep learning strategy, namely the 3D coded convolutional neural network (3D-CCNN), is proposed to efficiently solve the classification problem.
The accuracy of classification is effectively improved by exploiting the synergy between the deep learning network and coded apertures.
arXiv Detail & Related papers (2020-09-23T15:05:57Z)
- LodoNet: A Deep Neural Network with 2D Keypoint Matching for 3D LiDAR Odometry Estimation [22.664095688406412]
We propose to transfer the LiDAR frames to image space and reformulate the problem as image feature extraction.
With the help of the scale-invariant feature transform (SIFT) for feature extraction, we are able to generate matched keypoint pairs (MKPs).
A convolutional neural network pipeline is designed to estimate LiDAR odometry from the extracted MKPs.
The proposed scheme, namely LodoNet, is then evaluated on the KITTI odometry estimation benchmark, achieving results on par with or even better than the state of the art.
arXiv Detail & Related papers (2020-09-01T01:09:41Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
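As referenced in the NAF entry above, here is a minimal, purely illustrative sketch of the coordinate-network idea that entry summarizes: a scalar attenuation field represented as a continuous function of 3D position by a small fully-connected network, preceded by a learnable hashed-grid feature encoding. The single resolution, nearest-grid-point lookup, table size and layer widths are simplifying assumptions, not NAF's actual architecture.

import torch
import torch.nn as nn

class HashedCoordinateField(nn.Module):
    """Continuous attenuation field mu(x, y, z); all sizes are hypothetical."""
    def __init__(self, table_size=2**14, feat_dim=8, grid_res=64, hidden=64):
        super().__init__()
        self.grid_res = grid_res
        # Learnable hash table of feature vectors (one resolution here;
        # hash-encoding schemes typically use several resolutions plus interpolation).
        self.table = nn.Embedding(table_size, feat_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar attenuation coefficient
        )

    def forward(self, xyz):
        # xyz in [0, 1]^3, shape (N, 3) -> attenuation, shape (N, 1).
        idx = (xyz.clamp(0, 1) * (self.grid_res - 1)).long()
        # Spatial hash of integer grid coordinates (XOR with large primes).
        h = idx[:, 0] ^ (idx[:, 1] * 2654435761) ^ (idx[:, 2] * 805459861)
        h = h % self.table.num_embeddings
        return self.mlp(self.table(h))

field = HashedCoordinateField()
mu = field(torch.rand(1024, 3))  # query the field at 1024 random 3D points

In the self-supervised sparse-view CBCT setting that entry describes, such a field would be optimized by comparing projections synthesized from it against the measured ones; that rendering loop is omitted here.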
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.