AxonNet: A self-supervised Deep Neural Network for Intravoxel Structure
Estimation from DW-MRI
- URL: http://arxiv.org/abs/2103.11006v1
- Date: Fri, 19 Mar 2021 20:11:03 GMT
- Title: AxonNet: A self-supervised Deep Neural Network for Intravoxel Structure
Estimation from DW-MRI
- Authors: Hanna Ehrlich and Mariano Rivera
- Abstract summary: We show that neural networks (DNNs) have the potential to extract information from diffusion-weighted signals to reconstruct cerebral tracts.
We present two DNN models: one that estimates the axonal structure from a single voxel's signal, and another that estimates the structure of the central voxel using its neighborhood.
- Score: 0.12183405753834559
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We present a method for estimating intravoxel parameters from DW-MRI data
based on deep learning techniques. We show that deep neural networks (DNNs) have
the potential to extract information from diffusion-weighted signals to
reconstruct cerebral tracts. We present two DNN models: one that estimates the
axonal structure from a single voxel's signal, and another that estimates the
structure of the central voxel using the voxel neighborhood. Our methods are
based on a proposed parameter representation suitable for the problem. Since it
is practically impossible to obtain real labeled data for an arbitrary
acquisition protocol, we used a self-supervised training strategy. Experiments
with synthetic and real data show that our approach is competitive, and runtime
comparisons show that it is faster than SOTA methods, even when training time is
included. This computational advantage increases when multiple images acquired
with the same protocol must be predicted.
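The self-supervised strategy works because a forward signal model can generate unlimited (signal, parameter) training pairs, so no manually labeled data is needed. A toy numpy sketch of that idea, using a deliberately simplified isotropic single-tensor signal model and a tiny MLP (the signal model, layer sizes, and b-value are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy acquisition protocol: 32 gradient directions, one b-value (illustrative).
n_dirs, b = 32, 1000.0  # b in s/mm^2

def synth_signal(md):
    """Simplified isotropic single-tensor signal: S/S0 = exp(-b * MD),
    identical in every direction, plus Gaussian noise."""
    return np.exp(-b * md) * np.ones(n_dirs) + rng.normal(scale=0.01, size=n_dirs)

# Self-supervised data: sample mean diffusivities, render their signals,
# and use the sampled parameters themselves as regression targets.
md_train = rng.uniform(0.3e-3, 3.0e-3, size=2000)
X = np.stack([synth_signal(m) for m in md_train])
y = md_train * 1e3  # rescale target to O(1) for stable training

# One-hidden-layer MLP trained by full-batch gradient descent on MSE.
W1 = rng.normal(scale=0.1, size=(n_dirs, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=16); b2 = 0.0
lr = 0.05
for _ in range(1500):
    h = np.maximum(X @ W1 + b1, 0.0)              # ReLU hidden layer
    err = h @ W2 + b2 - y                         # prediction residual
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (h > 0)              # backprop through ReLU
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate on freshly simulated voxels (never-seen parameters).
md_test = rng.uniform(0.5e-3, 2.5e-3, size=200)
Xt = np.stack([synth_signal(m) for m in md_test])
ht = np.maximum(Xt @ W1 + b1, 0.0)
mae = np.abs(ht @ W2 + b2 - md_test * 1e3).mean()
```

Replacing the toy signal model with a realistic multi-tensor model and the MLP with the paper's voxel and neighborhood networks is the substantive step this sketch omits.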
Related papers
- Legged Robot State Estimation With Invariant Extended Kalman Filter
Using Neural Measurement Network [2.0405494347486197]
We develop a state estimation framework that integrates a neural measurement network (NMN) with an invariant extended Kalman filter.
Our approach significantly reduces position drift compared to the existing model-based state estimator.
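The filter being augmented can be illustrated with a plain (non-invariant) Kalman filter in which the learned measurement network is stubbed by a simple function. Everything below is a hedged toy on a 1D constant-velocity model, not the paper's Lie-group formulation or trained network:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
Q = 1e-3 * np.eye(2)                    # process noise covariance
H = np.array([[1.0, 0.0]])              # measurement model: observe position
R = np.array([[0.05]])                  # measurement noise covariance

def nmn_measurement(true_pos, rng):
    # Stand-in for the learned "neural measurement network": here just a
    # noisy position estimate (purely illustrative).
    return true_pos + rng.normal(scale=0.2)

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0])                # filter state estimate
P = np.eye(2)                           # state covariance
true_pos, true_vel = 0.0, 1.0

for _ in range(100):
    true_pos += true_vel * dt
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step with the pseudo-measurement
    z = np.array([nmn_measurement(true_pos, rng)])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

err = abs(x[0] - true_pos)
```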
arXiv Detail & Related papers (2024-02-01T06:06:59Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
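One simplified reading of a diffusion-based spectral entropy: build a Markov (diffusion) matrix from a Gaussian kernel on the data, then take the Shannon entropy of its normalized eigenvalue spectrum. Data on a low-dimensional manifold yields a fast-decaying spectrum and hence lower entropy than unstructured noise. The sketch below is an assumption-laden approximation of the idea, not the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_spectral_entropy(X, sigma=1.0, t=1):
    """Entropy of the (powered) eigenvalue spectrum of a diffusion
    operator built from a Gaussian kernel on the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    P = K / K.sum(axis=1, keepdims=True)        # row-stochastic diffusion matrix
    lam = np.abs(np.linalg.eigvals(P)) ** t     # spectrum, powered by time t
    p = lam / lam.sum()
    p = p[p > 1e-12]
    return -(p * np.log(p)).sum()

# Points on a line in R^10 (intrinsic dimension 1) vs. pure noise in R^10.
line = np.linspace(0, 1, 100)[:, None] * np.ones((1, 10))
noise = rng.normal(size=(100, 10))
h_line = diffusion_spectral_entropy(line)
h_noise = diffusion_spectral_entropy(noise)
```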
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We demonstrate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
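The memory-token mechanism can be sketched as cross-attention from input features to a small bank of learnable vectors whose retrieved summary is added back residually. This is a generic illustration, not the paper's specific heterogeneous design:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_mem = 8, 4

# Learnable memory tokens; here randomly initialized, but in training they
# would receive gradients like any other parameter.
memory = rng.normal(size=(n_mem, d))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def memory_augment(x):
    """Cross-attention read: each input vector queries the memory bank,
    and the retrieved summary is added back as a residual."""
    attn = softmax(x @ memory.T / np.sqrt(d))   # (batch, n_mem) weights
    read = attn @ memory                        # (batch, d) retrieved summary
    return x + read

x = rng.normal(size=(5, d))
out = memory_augment(x)
```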
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Patch-CNN: Training data-efficient deep learning for high-fidelity diffusion tensor estimation from minimal diffusion protocols [3.0416974614291226]
We propose a new method, Patch-CNN, for diffusion tensor (DT) estimation from only six-direction diffusion-weighted images (DWIs).
Compared with image-wise FCNs, the minimal kernel vastly reduces the training data demand.
The improved fibre orientation estimation is shown to produce improved tractograms.
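For context on why six directions are the minimum: the symmetric diffusion tensor has six unique components, so a noiseless log-linear fit is exactly determined by six measurements; Patch-CNN aims to improve on this fit under noise by adding spatial context. A sketch of the classical fit (the synthetic tensor and protocol below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Six gradient directions: the minimum needed to determine the six unique
# tensor components (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz).
g = rng.normal(size=(6, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)
b = 1000.0  # s/mm^2

def design_row(gi):
    # Coefficients of ln(S/S0) = -b * g^T D g in the six tensor unknowns.
    x, y, z = gi
    return [x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z]

B = b * np.array([design_row(gi) for gi in g])   # (6, 6) design matrix

# Ground-truth symmetric positive-definite tensor and its noiseless signals.
A = rng.normal(scale=1e-2, size=(3, 3))
D = A @ A.T * 1e1 + 1e-4 * np.eye(3)
d_true = np.array([D[0, 0], D[1, 1], D[2, 2], D[0, 1], D[0, 2], D[1, 2]])
S0 = 1.0
S = S0 * np.exp(-B @ d_true)

# Classical log-linear fit: exactly determined with six measurements.
d_fit = np.linalg.solve(B, -np.log(S / S0))
```

With noisy data this exactly-determined fit degrades badly, which is the regime where a learned patch-based estimator is claimed to help.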
arXiv Detail & Related papers (2023-07-03T20:39:48Z)
- Data-driven modelling of brain activity using neural networks, Diffusion Maps, and the Koopman operator [0.0]
We propose a machine-learning approach to model long-term out-of-sample dynamics of brain activity from task-dependent fMRI data.
We use Diffusion maps (DMs) to discover a set of variables that parametrize the low-dimensional manifold on which the emergent high-dimensional fMRI time series evolve.
We construct reduced-order-models (ROMs) on the embedded manifold via two techniques: Feedforward Neural Networks (FNNs) and the Koopman operator.
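The diffusion-maps step can be sketched as: Gaussian kernel on the data, row-normalization to a Markov matrix, and the leading nontrivial eigenvectors as embedding coordinates. A minimal version (the bandwidth and synthetic data are illustrative; the paper applies this to fMRI time series):

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_maps(X, sigma=0.5, n_coords=2):
    """Basic diffusion-maps embedding: Gaussian kernel, row-normalize to a
    Markov matrix, use its leading nontrivial eigenvectors as coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    idx = order[1:1 + n_coords]
    return (vecs[:, idx] * vals[idx]).real

# Noisy circle in R^3: a 1D manifold the leading coordinates parametrize.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
X += rng.normal(scale=0.01, size=X.shape)
emb = diffusion_maps(X)
```

The ROM step then models dynamics in these few coordinates instead of the full high-dimensional signal.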
arXiv Detail & Related papers (2023-04-24T09:08:12Z)
- Parameter estimation for WMTI-Watson model of white matter using encoder-decoder recurrent neural network [0.0]
In this study, we evaluate the performance of NLLS, the RNN-based method, and a multilayer perceptron (MLP) on rat and human brain datasets.
We showed that the proposed RNN-based fitting approach had the advantage of highly reduced computation time over NLLS.
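The speedup comes from amortization: the expensive fitting work is paid once at training time, after which every voxel costs only a cheap forward pass. A linear-model analogy of that pattern (the actual paper amortizes a nonlinear WMTI-Watson fit with an RNN; names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_params, n_vox = 64, 6, 20000

# Shared "protocol" design matrix, as in a fixed acquisition scheme.
A = rng.normal(size=(n_meas, n_params))
theta = rng.normal(size=(n_vox, n_params))                  # true parameters
Y = theta @ A.T + rng.normal(scale=0.01, size=(n_vox, n_meas))  # noisy signals

# Amortized idea: do the costly work once (here: one pseudo-inverse), then
# every voxel is a single matrix product. A trained network plays the same
# role for nonlinear models, replacing per-voxel iterative NLLS.
pinv = np.linalg.pinv(A)        # "training", done once per protocol
theta_hat = Y @ pinv.T          # batched per-voxel prediction
```

This is also why the advantage grows with the number of images sharing one protocol: the one-time cost is amortized over more voxels.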
arXiv Detail & Related papers (2022-03-01T16:33:15Z)
- Self-Learning for Received Signal Strength Map Reconstruction with Neural Architecture Search [63.39818029362661]
We present a model based on Neural Architecture Search (NAS) and self-learning for received signal strength (RSS) map reconstruction.
The approach first finds an optimal NN architecture and simultaneously trains the deduced model on some ground-truth measurements of a given RSS map.
Experimental results show that the signal predictions of this second model outperform non-learning-based state-of-the-art techniques and NN models without architecture search.
arXiv Detail & Related papers (2021-05-17T12:19:22Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Enhancing Fiber Orientation Distributions using convolutional Neural Networks [0.0]
We learn improved FODs for commercially acquired MRI.
We evaluate patch-based 3D convolutional neural networks (CNNs)
Our approach may enable robust CSD model estimation on single-shell dMRI acquisition protocols.
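The core operation of such patch-based models is a small 3D convolution applied over a local neighborhood of voxels. A naive numpy version of that single op (patch and kernel sizes are illustrative; real models stack many such layers via optimized libraries):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3d_valid(vol, kernel):
    """Naive valid 3D convolution (cross-correlation, as in deep learning):
    slide the kernel over the volume and take the windowed inner product."""
    kd, kh, kw = kernel.shape
    D, H, W = vol.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = (vol[i:i+kd, j:j+kh, k:k+kw] * kernel).sum()
    return out

patch = rng.normal(size=(9, 9, 9))    # e.g. a 9^3 feature patch around a voxel
kernel = rng.normal(size=(3, 3, 3))
feat = conv3d_valid(patch, kernel)
```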
arXiv Detail & Related papers (2020-08-12T16:06:25Z)
- Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by Spiking Neural Network [68.43026108936029]
We propose a pure spiking neural network (SNN) based computational model for precise sound localization in the noisy real-world environment.
We implement this algorithm in a real-time robotic system with a microphone array.
The experiment results show a mean error azimuth of 13 degrees, which surpasses the accuracy of the other biologically plausible neuromorphic approach for sound source localization.
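The underlying cue, interaural time difference (ITD), is classically estimated by cross-correlating the two ear signals; the SNN instead encodes the same cue with spike phase codes. A sketch of the conventional baseline on synthetic signals (the delay and sample values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
true_delay = 12  # interaural time difference, in samples

# One broadband source, arriving at the right ear `true_delay` samples later.
sig = rng.normal(size=2000)
left = sig
right = np.concatenate([np.zeros(true_delay), sig[:-true_delay]])

# Classical ITD estimate: the lag that maximizes cross-correlation.
max_lag = 40
lags = np.arange(-max_lag, max_lag + 1)
xc = [np.dot(left[max_lag:-max_lag],
             right[max_lag + l: len(right) - max_lag + l]) for l in lags]
est = lags[int(np.argmax(xc))]
```

Mapping the estimated delay to an azimuth angle then requires the microphone geometry, which this sketch omits.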
arXiv Detail & Related papers (2020-07-07T08:22:56Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.