Light-Field Microscopy for optical imaging of neuronal activity: when
model-based methods meet data-driven approaches
- URL: http://arxiv.org/abs/2110.13142v1
- Date: Sun, 24 Oct 2021 20:58:51 GMT
- Title: Light-Field Microscopy for optical imaging of neuronal activity: when
model-based methods meet data-driven approaches
- Authors: Pingfan Song, Herman Verinaz Jadan, Carmel L. Howe, Amanda J. Foust,
Pier Luigi Dragotti
- Abstract summary: Understanding how networks of neurons process information is one of the key challenges in modern neuroscience.
Light-field microscopy (LFM), a type of scanless microscope, is a particularly attractive candidate for high-speed 3D imaging.
This paper is devoted to a comprehensive survey of state-of-the-art computational methods for LFM, with a focus on model-based and data-driven approaches.
- Score: 28.872219458334587
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding how networks of neurons process information is one of the key
challenges in modern neuroscience. A necessary step to achieve this goal is to
be able to observe the dynamics of large populations of neurons over a large
area of the brain. Light-field microscopy (LFM), a type of scanless microscope,
is a particularly attractive candidate for high-speed three-dimensional (3D)
imaging. It captures volumetric information in a single snapshot, allowing
volumetric imaging at video frame-rates. Specific features of imaging neuronal
activity using LFM call for the development of novel machine learning
approaches that fully exploit priors embedded in physics and optics models.
Signal processing theory and wave-optics theory could play a key role in
filling this gap, and contribute to novel computational methods with enhanced
interpretability and generalization by integrating model-driven and data-driven
approaches. This paper is devoted to a comprehensive survey of state-of-the-art
computational methods for LFM, with a focus on model-based and data-driven
approaches.
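To make the model-based viewpoint concrete: LFM reconstruction is commonly posed as a linear inverse problem y = Hx + n, where H is a light-field point spread function derived from wave optics, x is the unknown volume of neuronal activity, and priors such as sparsity regularize the solution. The sketch below is a toy illustration of this formulation only, not a method from the survey; the random H stands in for a real optical model, and ISTA is one classical solver whose iterations later serve as templates for unrolled, data-driven networks.

```python
import numpy as np

# Toy linear forward model y = H @ x + noise.
# H is a random stand-in for a wave-optics light-field PSF matrix;
# x is a sparse volume of "neuronal activity" flattened into a vector.
rng = np.random.default_rng(0)
n_vox, n_pix = 200, 120                      # hypothetical volume / sensor sizes
H = rng.standard_normal((n_pix, n_vox)) / np.sqrt(n_pix)

x_true = np.zeros(n_vox)
x_true[rng.choice(n_vox, 8, replace=False)] = rng.uniform(1.0, 3.0, 8)
y = H @ x_true + 0.01 * rng.standard_normal(n_pix)

def ista(H, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for min_x 0.5*||y - Hx||^2 + lam*||x||_1."""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)             # gradient of the data-fidelity term
        z = x - grad / L                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(H, y)
print("recovered support:", np.flatnonzero(x_hat > 0.1))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Data-driven variants of this scheme typically keep the forward-model structure but learn the operator, the step size, or the thresholding from data.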
Related papers
- Towards Neural Foundation Models for Vision: Aligning EEG, MEG, and fMRI Representations for Decoding, Encoding, and Modality Conversion [0.11249583407496218]
This paper presents a novel approach towards creating a foundational model for aligning neural data and visual stimuli across multimodal representations of brain activity by leveraging contrastive learning.
We used electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) data.
Our framework's capabilities are demonstrated through three key experiments: decoding visual information from neural data, encoding images into neural representations, and converting between neural modalities.
arXiv Detail & Related papers (2024-11-14T12:27:27Z)
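As a rough sketch of the contrastive alignment idea described in the entry above: matched (neural recording, image) pairs are pulled together and mismatched pairs pushed apart with a symmetric InfoNCE objective. The embedding dimensions, batch size, and temperature below are hypothetical placeholders, not the paper's architecture or training setup.

```python
import numpy as np

def info_nce(z_neural, z_image, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings.

    z_neural, z_image: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalise so similarities are cosine similarities.
    z_n = z_neural / np.linalg.norm(z_neural, axis=1, keepdims=True)
    z_i = z_image / np.linalg.norm(z_image, axis=1, keepdims=True)
    logits = z_n @ z_i.T / temperature       # (batch, batch) similarity matrix

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)          # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))              # targets: the diagonal

    # Pull matched (neural, image) pairs together in both directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy usage with random "EEG/MEG/fMRI" and image embeddings.
rng = np.random.default_rng(0)
loss = info_nce(rng.standard_normal((16, 64)), rng.standard_normal((16, 64)))
print(f"contrastive loss on random embeddings: {loss:.3f}")
```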
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of the human visual system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding by employing only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
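A minimal sketch of the auto-encoder pre-training idea in the fMRI-PTE entry above: fMRI signals reshaped into fixed-size 2D representations are compressed to a latent code and reconstructed. All layer sizes and shapes here are hypothetical, illustrative choices rather than the fMRI-PTE architecture.

```python
import torch
import torch.nn as nn

class TinyFMRIAutoencoder(nn.Module):
    """Toy auto-encoder over 2D-reshaped fMRI frames (illustrative only).

    Compresses a (1, 64, 64) "surface map" to a latent code and reconstructs it;
    pre-training would minimise the reconstruction loss over many subjects.
    """
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyFMRIAutoencoder()
x = torch.randn(8, 1, 64, 64)                 # fake 2D fMRI representations
loss = nn.functional.mse_loss(model(x), x)    # reconstruction objective
print(loss.item())
```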
- Physics Embedded Machine Learning for Electromagnetic Data Imaging [83.27424953663986]
Electromagnetic (EM) imaging is widely applied in sensing for security, biomedicine, geophysics, and various industries.
It is an ill-posed inverse problem whose solution is usually computationally expensive. Machine learning (ML) techniques, and especially deep learning (DL), show potential for fast and accurate imaging.
This article surveys various schemes to incorporate physics in learning-based EM imaging.
arXiv Detail & Related papers (2022-07-26T02:10:15Z)
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess calcium fluorescence signals and to train and evaluate models on them, as well as a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
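The key constraint behind the CycleGAN approach in the entry above is cycle consistency between two mappings, G (pre-learning to post-learning activity) and F (the reverse). The fragment below shows only that consistency term, with toy linear maps standing in for trained generators and no adversarial discriminators; it is not the paper's pipeline.

```python
import numpy as np

def cycle_consistency_loss(G, F, x_pre, x_post):
    """L1 cycle-consistency: F(G(x_pre)) ~ x_pre and G(F(x_post)) ~ x_post.

    G maps pre-learning activity to post-learning activity, F the reverse.
    In a full CycleGAN this term is added to adversarial losses on G and F.
    """
    forward_cycle = np.mean(np.abs(F(G(x_pre)) - x_pre))
    backward_cycle = np.mean(np.abs(G(F(x_post)) - x_post))
    return forward_cycle + backward_cycle

# Toy linear "generators" standing in for trained networks.
rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32)) * 0.1 + np.eye(32)
W_inv = np.linalg.inv(W)
G = lambda x: x @ W          # pre -> post
F = lambda x: x @ W_inv      # post -> pre (exact inverse, so loss is ~ 0)

x_pre = rng.standard_normal((100, 32))   # e.g. calcium traces before learning
x_post = x_pre @ W
print("cycle loss:", cycle_consistency_loss(G, F, x_pre, x_post))
```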
- A Geometry-Informed Deep Learning Framework for Ultra-Sparse 3D Tomographic Image Reconstruction [13.44786774177579]
We establish a geometry-informed deep learning framework for ultra-sparse 3D tomographic image reconstruction.
We demonstrate that the seamless inclusion of known priors is essential to enhance the performance of 3D volumetric computed tomography imaging.
arXiv Detail & Related papers (2021-05-25T06:20:03Z)
- Model-inspired Deep Learning for Light-Field Microscopy with Application to Neuron Localization [27.247818386065894]
We propose a model-inspired deep learning approach to perform fast and robust 3D localization of sources using light-field microscopy images.
This is achieved by developing a deep network that efficiently solves a convolutional sparse coding problem.
Experiments on localization of mammalian neurons from light-fields show that the proposed approach simultaneously provides enhanced performance, interpretability and efficiency.
arXiv Detail & Related papers (2021-03-10T16:24:47Z)
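The entry above unrolls an iterative solver for a convolutional sparse coding problem into a trainable network. The sketch below is a generic LISTA-style illustration of that idea, assuming PyTorch; the filter sizes, number of stages, and use of 2D convolutions are placeholder choices, not the architecture from the paper.

```python
import torch
import torch.nn as nn

def soft_threshold(x, lam):
    """Element-wise soft-thresholding, the proximal operator of the L1 norm."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

class UnrolledCSC(nn.Module):
    """LISTA-style unrolling of convolutional sparse coding (illustrative only).

    Each stage mimics one ISTA iteration,
        z <- soft_threshold(z - W_e * (W_d * z - y), threshold_k),
    with analysis/synthesis convolutions and thresholds learned from data.
    """

    def __init__(self, n_filters=16, n_stages=5):
        super().__init__()
        self.encode = nn.Conv2d(1, n_filters, 7, padding=3)              # ~ H^T
        self.decode = nn.Conv2d(n_filters, 1, 7, padding=3, bias=False)  # ~ H
        self.thresholds = nn.Parameter(0.01 * torch.ones(n_stages))
        self.n_stages = n_stages

    def forward(self, y):
        z = soft_threshold(self.encode(y), self.thresholds[0])
        for k in range(1, self.n_stages):
            residual = self.decode(z) - y            # data-fidelity residual
            z = soft_threshold(z - self.encode(residual), self.thresholds[k])
        return z                                     # sparse feature maps

# Toy forward pass on a fake single-channel light-field image.
net = UnrolledCSC()
codes = net(torch.randn(1, 1, 64, 64))
print(codes.shape)   # torch.Size([1, 16, 64, 64])
```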
- Deep brain state classification of MEG data [2.9048924265579124]
This paper uses Magnetoencephalography (MEG) data, provided by the Human Connectome Project (HCP), in combination with various deep artificial neural network models to perform brain decoding.
arXiv Detail & Related papers (2020-07-02T05:51:57Z)