Patch-CNN: Training data-efficient deep learning for high-fidelity
diffusion tensor estimation from minimal diffusion protocols
- URL: http://arxiv.org/abs/2307.01346v1
- Date: Mon, 3 Jul 2023 20:39:48 GMT
- Title: Patch-CNN: Training data-efficient deep learning for high-fidelity
diffusion tensor estimation from minimal diffusion protocols
- Authors: Tobias Goodwin-Allcock, Ting Gong, Robert Gray, Parashkev Nachev and
Hui Zhang
- Abstract summary: We propose a new method, Patch-CNN, for diffusion tensor (DT) estimation from only six-direction diffusion weighted images (DWI).
Compared with image-wise CNNs, the minimal kernel vastly reduces training data demand.
The improved fibre orientation estimation is shown to produce improved tractograms.
- Score: 3.0416974614291226
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a new method, Patch-CNN, for diffusion tensor (DT) estimation from
only six-direction diffusion weighted images (DWI). Deep learning-based methods
have been recently proposed for dMRI parameter estimation, using either
voxel-wise fully-connected neural networks (FCN) or image-wise convolutional
neural networks (CNN). In the acute clinical context -- where pressure of time
limits the number of imaged directions to a minimum -- existing approaches
either require an infeasible number of training image volumes (image-wise
CNNs), or do not estimate the fibre orientations (voxel-wise FCNs) required for
tractogram estimation. To overcome these limitations, we propose Patch-CNN, a
neural network with a minimal (non-voxel-wise) convolutional kernel
(3$\times$3$\times$3). Compared with voxel-wise FCNs, this has the advantage of
allowing the network to leverage local anatomical information. Compared with
image-wise CNNs, the minimal kernel vastly reduces training data demand.
Evaluated against both conventional model fitting and a voxel-wise FCN,
Patch-CNN, trained with a single subject, is shown to improve the estimation of
both scalar dMRI parameters and fibre orientation from six-direction DWIs. The
improved fibre orientation estimation is shown to produce improved tractograms.
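To make the architecture concrete, below is a minimal sketch of a Patch-CNN-style network in PyTorch. The layer widths, depth, and activation choices are illustrative assumptions, not the authors' published configuration; only the 3x3x3 receptive field, the six-direction input, and the six diffusion-tensor outputs follow the abstract.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Map a six-channel DWI patch (one channel per gradient direction)
    to the six unique diffusion-tensor components of the centre voxel."""
    def __init__(self, n_dirs: int = 6, n_dt: int = 6, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # A single 3x3x3 convolution: the receptive field is one
            # local neighbourhood, so the network sees anatomical
            # context without image-wise training-data demands.
            nn.Conv3d(n_dirs, width, kernel_size=3),
            nn.ReLU(),
            # 1x1x1 convolutions then act voxel-wise on the features.
            nn.Conv3d(width, width, kernel_size=1),
            nn.ReLU(),
            nn.Conv3d(width, n_dt, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 6, 3, 3, 3) patches -> (batch, 6, 1, 1, 1)
        return self.net(x)

model = PatchCNN()
patches = torch.rand(8, 6, 3, 3, 3)   # dummy b0-normalised DWI patches
dt = model(patches).squeeze()         # (8, 6) tensor components
```

Since every 3x3x3 neighbourhood in a brain volume is a separate training example, a single subject already supplies a very large patch set, which is the intuition behind the reduced training-data demand relative to image-wise CNNs.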
Related papers
- Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
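As a rough illustration of the multi-threshold idea named in the title, the hedged sketch below (PyTorch) emits graded spikes equal to the largest threshold crossed, so a single timestep carries more information than a binary spike. The threshold values and the implied reset behaviour are assumptions, not the paper's exact neuron model.

```python
import torch

def multi_threshold_spike(membrane: torch.Tensor,
                          thresholds=(1.0, 2.0, 4.0)) -> torch.Tensor:
    """Emit a graded spike equal to the largest threshold crossed."""
    spikes = torch.zeros_like(membrane)
    for th in sorted(thresholds):
        # Each pass overwrites the spike value wherever the membrane
        # potential also clears this higher threshold.
        spikes = torch.where(membrane >= th,
                             torch.full_like(membrane, th), spikes)
    return spikes

v = torch.tensor([0.5, 1.3, 2.7, 5.0])
print(multi_threshold_spike(v))  # tensor([0., 1., 2., 4.])
```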
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Parameter estimation for WMTI-Watson model of white matter using encoder-decoder recurrent neural network [0.0]
In this study, we evaluate the performance of NLLS, the RNN-based method and a multilayer perceptron (MLP) on rat and human brain datasets.
We show that the proposed RNN-based fitting approach offers greatly reduced computation time compared with NLLS.
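The speed advantage comes from amortised inference: the network is trained once on simulated (signal, parameter) pairs, after which estimation is a single forward pass instead of a per-voxel iterative fit. Below is a generic PyTorch sketch of this idea, using a toy mono-exponential decay in place of the WMTI-Watson model; the architecture and training settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

b = torch.linspace(0.0, 3.0, 16)               # toy b-values

def simulate(d: torch.Tensor) -> torch.Tensor:
    """Mono-exponential stand-in for the biophysical forward model."""
    return torch.exp(-b[None, :] * d[:, None])

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):                          # train on simulations only
    d_true = torch.rand(256) * 3.0             # random ground-truth params
    signal = simulate(d_true) + 0.01 * torch.randn(256, 16)
    loss = ((net(signal).squeeze(1) - d_true) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Estimation is now one forward pass per batch of voxels, no NLLS loop.
d_hat = net(simulate(torch.tensor([1.0, 2.0])))
```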
arXiv Detail & Related papers (2022-03-01T16:33:15Z)
- AxonNet: A self-supervised Deep Neural Network for Intravoxel Structure Estimation from DW-MRI [0.12183405753834559]
We show that deep neural networks (DNNs) have the potential to extract information from diffusion-weighted signals to reconstruct cerebral tracts.
We present two DNN models: one that estimates the axonal structure in the form of a voxel, and another that calculates the structure of the central voxel.
arXiv Detail & Related papers (2021-03-19T20:11:03Z)
- Selfish Sparse RNN Training [13.165729746380816]
We propose an approach to train sparse RNNs with a fixed parameter count in one single run, without compromising performance.
We achieve state-of-the-art sparse training results on the Penn TreeBank and Wikitext-2 datasets.
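Training a sparse network with a fixed parameter count in one run rests on a prune-and-regrow cycle: periodically drop the smallest-magnitude active weights and activate the same number of new positions. The PyTorch sketch below shows this generic mechanism; the paper's non-uniform redistribution across RNN cell gates is not reproduced, and random regrowth stands in for its actual criterion.

```python
import torch

def prune_and_regrow(weight: torch.Tensor, mask: torch.Tensor,
                     frac: float = 0.1) -> torch.Tensor:
    """Drop the smallest-magnitude active weights and re-activate the
    same number of inactive positions, keeping the budget constant."""
    flat_mask = mask.clone().flatten()
    active = flat_mask.bool()
    k = int(frac * active.sum().item())
    if k == 0:
        return mask
    # Candidate regrowth sites: positions inactive before this step.
    inactive = (~active).nonzero().flatten()
    k = min(k, inactive.numel())
    # Prune the k smallest-magnitude active weights.
    vals = weight.abs().flatten().masked_fill(~active, float("inf"))
    drop = torch.topk(vals, k, largest=False).indices
    flat_mask[drop] = 0.0
    # Regrow k positions (random here; gradient-based in practice).
    grow = inactive[torch.randperm(inactive.numel())[:k]]
    flat_mask[grow] = 1.0
    return flat_mask.view_as(weight)

w = torch.randn(8, 8)
m = (torch.rand(8, 8) < 0.3).float()          # ~30% dense mask
m2 = prune_and_regrow(w, m)
print(m.sum().item(), m2.sum().item())        # parameter count unchanged
```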
arXiv Detail & Related papers (2021-01-22T10:45:40Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
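For intuition, the hedged NumPy sketch below generates the kind of low-SNR multi-channel training data described above for a uniform linear array. The geometry, SNR, and the sample-covariance input are common choices in CNN-based DoA estimation, not necessarily this paper's exact pipeline.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors=8, spacing=0.5):
    """Uniform linear array response for a source at angle theta
    (degrees); element spacing is in wavelengths."""
    n = np.arange(n_sensors)
    return np.exp(2j * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

def sample_covariance(theta_deg, snr_db=-10.0, n_snap=64, n_sensors=8):
    a = steering_vector(theta_deg, n_sensors)[:, None]      # (M, 1)
    # Unit-power complex Gaussian source signal.
    s = (np.random.randn(1, n_snap)
         + 1j * np.random.randn(1, n_snap)) / np.sqrt(2)
    sigma = np.sqrt(10 ** (-snr_db / 10) / 2)
    noise = sigma * (np.random.randn(n_sensors, n_snap)
                     + 1j * np.random.randn(n_sensors, n_snap))
    x = a @ s + noise                                       # low-SNR snapshots
    return x @ x.conj().T / n_snap                          # CNN input feature

R = sample_covariance(17.0)   # label: the DoA grid bin containing 17 deg
```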
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
Kernel methods outperform fully-connected finite-width networks.
Centered and ensembled finite networks have reduced posterior variance.
Weight decay and the use of a large learning rate break the correspondence between finite and infinite networks.
arXiv Detail & Related papers (2020-07-31T01:57:47Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
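At the heart of ANN-to-SNN conversion is rate coding: an integrate-and-fire neuron driven by a constant input fires at a rate that approximates the ReLU activation it replaces. The PyTorch sketch below illustrates this correspondence; it is a generic conversion building block, not the paper's progressive tandem-learning procedure.

```python
import torch

def if_rate(x: torch.Tensor, T: int = 100, v_th: float = 1.0):
    """Run an integrate-and-fire neuron for T steps with constant
    input x; the spike rate approaches relu(x) / v_th (capped at 1)."""
    v = torch.zeros_like(x)
    spikes = torch.zeros_like(x)
    for _ in range(T):
        v = v + x                       # integrate constant input
        fired = (v >= v_th).float()
        spikes += fired
        v = v - fired * v_th            # soft reset keeps the residue
    return spikes / T

x = torch.tensor([-0.5, 0.2, 0.7])
print(if_rate(x))                       # ~ [0.00, 0.20, 0.70]
print(torch.relu(x))                    # the activation being mimicked
```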
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Sound Event Detection with Depthwise Separable and Dilated Convolutions [23.104644393058123]
State-of-the-art sound event detection (SED) methods usually employ a series of convolutional neural networks (CNNs), followed by recurrent neural networks (RNNs), to extract useful features from the input audio signal.
We propose the replacement of the CNNs with depthwise separable convolutions and the replacement of the RNNs with dilated convolutions.
We achieve an 85% reduction in the number of parameters and a 78% reduction in average training time per epoch, together with a 4.6% increase in the average frame-wise F1 score and a 3.8% reduction in the average error rate.
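The parameter saving from depthwise separable convolutions is easy to verify: a depthwise convolution applies one k x k filter per channel, and a 1x1 pointwise convolution then mixes channels. A minimal PyTorch comparison follows; the channel counts are illustrative, not the paper's configuration.

```python
import torch.nn as nn

c_in, c_out, k = 128, 128, 3

standard = nn.Conv2d(c_in, c_out, k, padding=1)
separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, k, padding=1, groups=c_in),  # depthwise
    nn.Conv2d(c_in, c_out, 1),                         # pointwise
)

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

print(count_params(standard), count_params(separable))
# 147584 vs 17792: roughly an order-of-magnitude fewer parameters,
# in line with the ~85% reduction reported above.
```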
arXiv Detail & Related papers (2020-02-02T19:50:51Z)