DIFFnet: Diffusion parameter mapping network generalized for input
diffusion gradient schemes and bvalues
- URL: http://arxiv.org/abs/2102.02463v1
- Date: Thu, 4 Feb 2021 07:45:36 GMT
- Title: DIFFnet: Diffusion parameter mapping network generalized for input
diffusion gradient schemes and bvalues
- Authors: Juhung Park, Woojin Jung, Eun-Jung Choi, Se-Hong Oh, Dongmyung Shin,
Hongjun An, and Jongho Lee
- Abstract summary: A new deep neural network, referred to as DIFFnet, is developed to function as a generalized reconstruction tool of the diffusion-weighted signals.
DIFFnet is evaluated for diffusion tensor imaging (DIFFnetDTI) and for neurite orientation dispersion and density imaging (DIFFnetNODDI).
The results demonstrate accurate reconstruction of the diffusion parameters at substantially reduced processing time.
- Score: 6.7487278071108525
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In MRI, deep neural networks have been proposed to reconstruct diffusion
model parameters. However, the inputs of the networks were designed for a
specific diffusion gradient scheme (i.e., diffusion gradient directions and
numbers) and a specific b-value that are the same as the training data. In this
study, a new deep neural network, referred to as DIFFnet, is developed to
function as a generalized reconstruction tool of the diffusion-weighted signals
for various gradient schemes and b-values. For generalization, diffusion
signals are normalized in a q-space and then projected and quantized, producing
a matrix (Qmatrix) as an input for the network. To demonstrate the validity of
this approach, DIFFnet is evaluated for diffusion tensor imaging (DIFFnetDTI)
and for neurite orientation dispersion and density imaging (DIFFnetNODDI). In
each model, two datasets with different gradient schemes and b-values are
tested. The results demonstrate accurate reconstruction of the diffusion
parameters at substantially reduced processing time (approximately 8.7 times
and 2240 times faster processing time than conventional methods in DTI and
NODDI, respectively; less than 4% mean normalized root-mean-square errors
(NRMSE) in DTI and less than 8% in NODDI). The generalization capability of the
networks was further validated using reduced numbers of diffusion signals from
the datasets. Different from previously proposed deep neural networks, DIFFnet
does not require any specific gradient scheme and b-value for its input. As a
result, it can be adopted as an online reconstruction tool for various complex
diffusion imaging.
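The abstract describes a three-step preprocessing pipeline: diffusion signals are normalized in q-space, projected, and quantized into a matrix (Qmatrix) that serves as the network input. The following is a minimal sketch of that idea; the function name, the bin count, the maximum b-value, and the choice of projection plane are all illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def build_qmatrix(signals, bvecs, bvals, n_bins=40, b_max=3000.0):
    """Hypothetical sketch of a DIFFnet-style Qmatrix construction.

    signals : (N,) diffusion-weighted signals, normalized by the b=0 signal
    bvecs   : (N, 3) unit diffusion gradient directions
    bvals   : (N,) b-values in s/mm^2

    Each measurement is placed at a q-space position whose radius scales
    with its normalized b-value, projected onto a coordinate plane, and
    accumulated into a quantized 2D grid of fixed size.
    """
    # Normalize b-values so any acquisition scheme maps into the same
    # q-space range (q is proportional to the square root of b).
    radii = np.sqrt(np.clip(bvals, 0.0, b_max) / b_max)   # radius in [0, 1]
    coords = bvecs * radii[:, None]                        # q-space positions

    qmatrix = np.zeros((n_bins, n_bins))
    # Project onto the x-y plane and quantize into n_bins x n_bins cells.
    ij = np.clip(((coords[:, :2] + 1.0) / 2.0 * n_bins).astype(int),
                 0, n_bins - 1)
    for (i, j), s in zip(ij, signals):
        qmatrix[i, j] += s                                 # accumulate signal per cell
    return qmatrix
```

Because the grid has a fixed size regardless of how many directions or which b-values were acquired, a network taking such a matrix as input is decoupled from the specific gradient scheme, which is the generalization property the abstract claims.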
Related papers
- Multi-Source and Test-Time Domain Adaptation on Multivariate Signals using Spatio-Temporal Monge Alignment [59.75420353684495]
Machine learning applications on signals such as computer vision or biomedical data often face challenges due to the variability that exists across hardware devices or session recordings.
In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate these variabilities.
We show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings.
arXiv Detail & Related papers (2024-07-19T13:33:38Z)
- Adaptive Multilevel Neural Networks for Parametric PDEs with Error Estimation [0.0]
A neural network architecture is presented to solve high-dimensional parameter-dependent partial differential equations (pPDEs)
It is constructed to map parameters of the model data to corresponding finite element solutions.
It outputs a coarse grid solution and a series of corrections as produced in an adaptive finite element method (AFEM)
arXiv Detail & Related papers (2024-03-19T11:34:40Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Recovering high-quality FODs from a reduced number of diffusion-weighted images using a model-driven deep learning architecture [0.0]
We propose a model-driven deep learning FOD reconstruction architecture.
It ensures intermediate and output FODs produced by the network are consistent with the input DWI signals.
Our results show that the model-based deep learning architecture achieves competitive performance compared to a state-of-the-art FOD super-resolution network, FOD-Net.
arXiv Detail & Related papers (2023-07-28T02:47:34Z)
- Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data [63.34506218832164]
In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with ReLU activations.
For gradient flow, we leverage recent work on the implicit bias for homogeneous neural networks to show that, asymptotically, gradient flow produces a neural network with rank at most two.
For gradient descent, provided the random initialization variance is small enough, we show that a single step of gradient descent suffices to drastically reduce the rank of the network, and that the rank remains small throughout training.
arXiv Detail & Related papers (2022-10-13T15:09:54Z)
- R2-AD2: Detecting Anomalies by Analysing the Raw Gradient [0.6299766708197883]
We propose a novel semi-supervised anomaly detection method called R2-AD2.
By analysing the temporal distribution of the gradient over multiple training steps, we reliably detect point anomalies.
R2-AD2 works in a purely data-driven way, thus is readily applicable in a variety of important use cases of anomaly detection.
arXiv Detail & Related papers (2022-06-21T11:13:33Z)
- Convolutional Neural Network to Restore Low-Dose Digital Breast Tomosynthesis Projections in a Variance Stabilization Domain [15.149874383250236]
A convolutional neural network (CNN) is proposed to restore low-dose (LD) projections to an image quality equivalent to a standard full-dose (FD) acquisition.
The network achieved superior results in terms of the mean normalized squared error (MNSE), normalized training time, and noise spatial correlation compared with networks trained with traditional data-driven methods.
arXiv Detail & Related papers (2022-03-22T13:31:47Z)
- Diffusion Mechanism in Residual Neural Network: Theory and Applications [12.573746641284849]
In many learning tasks with limited training samples, the diffusion connects the labeled and unlabeled data points.
We propose a novel diffusion residual network (Diff-ResNet), which internally introduces diffusion into the architectures of neural networks.
Under the structured data assumption, it is proved that the proposed diffusion block can increase the distance-diameter ratio, which improves the separability of inter-class points.
arXiv Detail & Related papers (2021-05-07T10:42:59Z)
- Diffusion Earth Mover's Distance and Distribution Embeddings [61.49248071384122]
Diffusion EMD can be computed in $\tilde{O}(n)$ time and is more accurate than similarly fast algorithms such as tree-based methods.
We show that Diffusion EMD is fully differentiable, making it amenable to future uses in gradient-descent frameworks such as deep neural networks.
arXiv Detail & Related papers (2021-02-25T13:18:32Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
- Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
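The SIDE summary above describes each DW volume containing slices encoded with different diffusion gradients, so that each volume is highly slice-undersampled. A toy illustration of such an interleaving pattern is sketched below; the function name, the interleave factor, and the cyclic offset rule are assumptions for illustration only, not the scheme from the paper.

```python
import numpy as np

def side_sampling_mask(n_slices, n_gradients, n_interleaves=4):
    """Hypothetical illustration of a SIDE-style acquisition pattern.

    Returns a boolean (n_gradients, n_slices) mask that is True where a
    slice is acquired for a given diffusion gradient. Each volume keeps
    only every n_interleaves-th slice, with the starting offset cycling
    across gradients, so each volume is slice-undersampled by roughly a
    factor of n_interleaves.
    """
    mask = np.zeros((n_gradients, n_slices), dtype=bool)
    for g in range(n_gradients):
        offset = g % n_interleaves           # cycle the slice offset per gradient
        mask[g, offset::n_interleaves] = True
    return mask
```

With such a pattern, acquisition time per volume drops by about the interleave factor, and a deep learning model (as the summary notes) is then needed to reconstruct full DW volumes from the undersampled data.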
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.