Parameter estimation for WMTI-Watson model of white matter using
encoder-decoder recurrent neural network
- URL: http://arxiv.org/abs/2203.00595v2
- Date: Wed, 2 Mar 2022 09:26:49 GMT
- Title: Parameter estimation for WMTI-Watson model of white matter using
encoder-decoder recurrent neural network
- Authors: Yujian Diao and Ileana Ozana Jelescu
- Abstract summary: In this study, we evaluate the performance of NLLS, the RNN-based method and a multilayer perceptron (MLP) on synthetic and in vivo datasets of rat and human brain.
We showed that the proposed RNN-based fitting approach had the advantage of highly reduced computation time over NLLS.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Biophysical modelling of the diffusion MRI signal provides estimates of
specific microstructural tissue properties. Although nonlinear optimization
such as nonlinear least squares (NLLS) is the most widespread method for model
estimation, it suffers from local minima and high computational cost. Deep
Learning approaches are steadily replacing nonlinear fitting, but come with the
limitation that the model needs to be retrained for each acquisition protocol
and noise level. The White Matter Tract Integrity (WMTI)-Watson model was
proposed as an implementation of the Standard Model of diffusion in white
matter that estimates model parameters from the diffusion and kurtosis tensors
(DKI). Here we propose a deep learning approach based on the encoder-decoder
recurrent neural network (RNN) to increase the robustness and accelerate the
parameter estimation of WMTI-Watson. We use an embedding approach to render the
model insensitive to potential differences in distributions between training
data and experimental data. This RNN-based solver thus has the advantage of
being highly efficient in computation and more readily translatable to other
datasets, irrespective of acquisition protocol and underlying parameter
distributions, as long as DKI has been pre-computed from the data. In this study, we
evaluated the performance of NLLS, the RNN-based method and a multilayer
perceptron (MLP) on synthetic and in vivo datasets of rat and human brain. We
showed that the proposed RNN-based fitting approach had the advantage of highly
reduced computation time over NLLS (from hours to seconds), with similar
accuracy and precision but improved robustness, and superior translatability to
new datasets over MLP.
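As a concrete illustration of the pipeline described in the abstract, here is a minimal sketch in PyTorch of an encoder-decoder RNN that regresses WMTI-Watson parameters from pre-computed DKI features. The six-scalar input, the five-parameter output (f, Da, De_par, De_perp, c2), the GRU cells, the per-scalar embedding and all layer sizes are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of an encoder-decoder RNN for WMTI-Watson parameter
# estimation from pre-computed DKI features. All architectural choices
# below (GRU cells, hidden size, input/output conventions) are assumptions
# for illustration, not the authors' published configuration.
import torch
import torch.nn as nn

N_DKI_FEATURES = 6  # assumed input: six DKI-derived scalars per voxel
N_PARAMS = 5        # assumed output: f, Da, De_par, De_perp, c2

class Seq2SeqRegressor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Embed each scalar into a learned space so the solver is less
        # sensitive to the raw value distributions of a given dataset.
        self.embed = nn.Linear(1, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, N_DKI_FEATURES); treat the features as a short sequence
        h = self.embed(x.unsqueeze(-1))       # (B, 6, hidden)
        _, state = self.encoder(h)            # encoder summary state
        # Autoregressively decode one model parameter per step.
        step = torch.zeros(x.size(0), 1, state.size(-1), device=x.device)
        outs = []
        for _ in range(N_PARAMS):
            step, state = self.decoder(step, state)
            outs.append(self.head(step))      # (B, 1, 1) per parameter
        return torch.cat(outs, dim=1).squeeze(-1)  # (B, N_PARAMS)

model = Seq2SeqRegressor()
dki = torch.rand(32, N_DKI_FEATURES)  # toy batch of 32 voxels
estimates = model(dki)                # (32, 5) parameter estimates
```

In a setup like this, the network would be trained with a regression loss (e.g. MSE) on synthetic feature/parameter pairs; at inference time, whole-brain estimation is a single batched forward pass, consistent with the hours-to-seconds speed-up over voxel-wise NLLS reported above.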
Related papers
- A model for multi-attack classification to improve intrusion detection
performance using deep learning approaches [0.0]
The objective here is to create a reliable intrusion detection mechanism to help identify malicious attacks.
A deep learning-based solution framework is developed, consisting of three approaches.
The first approach is a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) evaluated with seven optimizer functions: Adamax, SGD, Adagrad, Adam, RMSprop, Nadam and Adadelta.
The models self-learn the features and classify the attack classes as a multi-attack classification.
arXiv Detail & Related papers (2023-10-25T05:38:44Z) - Short-term power load forecasting method based on CNN-SAEDN-Res [12.733504847643005]
This paper presents a short-term load forecasting method based on a convolutional neural network (CNN), a self-attention encoder-decoder network (SAEDN) and residual refinement (Res).
The proposed method has advantages in terms of prediction accuracy and prediction stability.
arXiv Detail & Related papers (2023-09-02T11:36:50Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including element-wise missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Supervised Training of Siamese Spiking Neural Networks with Earth's
Mover Distance [4.047840018793636]
This study adapts the highly versatile Siamese neural network model to the event data domain.
We introduce a supervised training framework for optimizing Earth's Mover Distance between spike trains with spiking neural networks (SNNs).
arXiv Detail & Related papers (2022-02-20T00:27:57Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Compressing LSTM Networks by Matrix Product Operators [7.395226141345625]
Long Short-Term Memory (LSTM) models are the building blocks of many state-of-the-art natural language processing (NLP) and speech enhancement (SE) algorithms.
Here we introduce the MPO decomposition, which describes the local correlation of quantum states in quantum many-body physics.
We propose a matrix product operator (MPO) based neural network architecture to replace the LSTM model.
arXiv Detail & Related papers (2020-12-22T11:50:06Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.