A microstructure estimation Transformer inspired by sparse
representation for diffusion MRI
- URL: http://arxiv.org/abs/2205.06450v1
- Date: Fri, 13 May 2022 05:14:22 GMT
- Title: A microstructure estimation Transformer inspired by sparse
representation for diffusion MRI
- Authors: Tianshu Zheng, Cong Sun, Weihao Zheng, Wen Shi, Haotian Li, Yi Sun, Yi
Zhang, Guangbin Wang, Chuyang Ye, Dan Wu
- Abstract summary: We present a Transformer-based learning framework for dMRI-based microstructure estimation with downsampled q-space data.
The proposed method achieved up to an 11.25-fold acceleration in scan time and outperformed other state-of-the-art learning-based methods.
- Score: 11.761543033212797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion magnetic resonance imaging (dMRI) is an important tool in
characterizing tissue microstructure based on biophysical models, which are
complex and highly non-linear. Resolving microstructures with optimization
techniques is prone to estimation errors and requires dense sampling in the
q-space. Deep learning based approaches have been proposed to overcome these
limitations. Motivated by the superior performance of the Transformer, in this
work we present a Transformer-based learning framework, namely, a
Microstructure Estimation Transformer with Sparse Coding (METSC), for
dMRI-based microstructure estimation with downsampled q-space data. To take
advantage of the Transformer while addressing its need for large training
data, we explicitly introduce an inductive bias, a model bias, into the
Transformer using a sparse coding technique to facilitate training. The METSC
is thus composed of three stages: an embedding stage, a sparse representation
stage, and a mapping stage. The embedding stage is a Transformer-based
structure that encodes the signal so that each voxel is
represented effectively. In the sparse representation stage, a dictionary is
learned by unfolding the Iterative Hard Thresholding (IHT) process to solve a
sparse reconstruction problem. The mapping stage is essentially a decoder that
computes the microstructural parameters from the output of the second stage,
based on a weighted sum of normalized dictionary coefficients, where the
weights are also learned. We tested our framework on two dMRI models
with downsampled q-space data, including the intravoxel incoherent motion
(IVIM) model and the neurite orientation dispersion and density imaging (NODDI)
model. The proposed method achieved up to an 11.25-fold acceleration in scan
time and outperformed other state-of-the-art learning-based methods.
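As a concrete reading of the sparse representation stage, one unrolled IHT iteration, x <- H_k(x + W^T (y - W x)), can be implemented as a network layer whose dictionary W is learned. The PyTorch sketch below is a minimal illustration of this unfolding idea under assumed names and dimensions, not the authors' implementation:

```python
import torch
import torch.nn as nn

def hard_threshold(x, k):
    """H_k: keep the k largest-magnitude coefficients per voxel, zero the rest."""
    idx = torch.topk(x.abs(), k, dim=-1).indices
    mask = torch.zeros_like(x).scatter_(-1, idx, 1.0)
    return x * mask

class UnfoldedIHTLayer(nn.Module):
    """One unrolled IHT iteration; classic IHT fixes the dictionary W, while
    unfolded networks typically make it a trainable parameter (assumed here)."""
    def __init__(self, signal_dim, dict_size, sparsity):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(signal_dim, dict_size))
        self.sparsity = sparsity

    def forward(self, x, y):
        # x: (batch, dict_size) sparse code; y: (batch, signal_dim) embedded signal
        residual = y - x @ self.W.t()   # data-fit residual in signal space
        x = x + residual @ self.W       # gradient step on 0.5 * ||y - W x||^2
        return hard_threshold(x, self.sparsity)

# Usage sketch: stack a few layers and iterate from a zero code.
layers = nn.ModuleList(UnfoldedIHTLayer(30, 300, 10) for _ in range(5))
y = torch.randn(8, 30)                  # a batch of embedded q-space signals
x = torch.zeros(8, 300)
for layer in layers:
    x = layer(x, y)
```

For reference, the IVIM experiment fits the standard two-compartment signal equation (general IVIM background, not taken from this abstract):

$$\frac{S(b)}{S(0)} = f\,e^{-b D^{*}} + (1 - f)\,e^{-b D},$$

where f is the perfusion fraction, D* the pseudo-diffusion coefficient, and D the tissue diffusion coefficient; downsampling q-space amounts to estimating these parameters from fewer b-values.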
Related papers
- A Neural Network Transformer Model for Composite Microstructure Homogenization [1.2277343096128712]
Homogenization methods, such as the Mori-Tanaka method, offer rapid homogenization for a wide range of constituent properties.
This paper illustrates a transformer neural network architecture that captures the knowledge of various microstructures.
The network predicts the history-dependent, non-linear, and homogenized stress-strain response.
arXiv Detail & Related papers (2023-04-16T19:57:52Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness structure in the frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1). A toy spectral-layer sketch follows this entry.
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
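Taking the T1 summary literally, "a single transform" suggests: FFT the input once, do all learning on the spectrum, and inverse-FFT once at the end. The toy PyTorch sketch below illustrates only that pattern (a real network would interleave nonlinearities between spectral layers); it is an assumption-laden reading, not the paper's T1 operator:

```python
import torch
import torch.nn as nn

class SpectralLayer(nn.Module):
    """Toy frequency-domain layer: elementwise complex weights on rFFT modes."""
    def __init__(self, n_modes):
        super().__init__()
        self.weight = nn.Parameter(0.02 * torch.randn(n_modes, dtype=torch.cfloat))

    def forward(self, u_hat):               # u_hat: (batch, n_modes) complex
        return u_hat * self.weight

n = 256
layers = nn.ModuleList(SpectralLayer(n // 2 + 1) for _ in range(3))
u = torch.randn(8, n)                       # batch of 1-D signals
u_hat = torch.fft.rfft(u)                   # transform once on the way in
for layer in layers:
    u_hat = layer(u_hat)
out = torch.fft.irfft(u_hat, n=n)           # transform once on the way out
```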
- Machine Learning model for gas-liquid interface reconstruction in CFD numerical simulations [59.84561168501493]
The volume of fluid (VoF) method is widely used in multi-phase flow simulations to track and locate the interface between two immiscible fluids.
A major bottleneck of the VoF method is the interface reconstruction step due to its high computational cost and low accuracy on unstructured grids.
We propose a machine learning enhanced VoF method based on Graph Neural Networks (GNN) to accelerate the interface reconstruction on general unstructured meshes.
arXiv Detail & Related papers (2022-07-12T17:07:46Z)
- K-Space Transformer for Fast MRI Reconstruction with Implicit Representation [39.04792898427536]
We propose a Transformer-based framework for processing sparsely sampled MRI signals in k-space.
We adopt an implicit representation of the spectrogram, treating spatial coordinates as inputs, and dynamically query the partially observed measurements.
To strike a balance between computational cost and reconstruction quality, we build a hierarchical structure with low-resolution and high-resolution decoders, respectively. A toy coordinate-query sketch follows this entry.
arXiv Detail & Related papers (2022-06-14T16:06:15Z)
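The implicit-representation idea above can be pictured as a small network that maps a k-space coordinate to a measured value, fit on the sparsely observed samples and then queried anywhere. The sketch below is a generic coordinate-MLP toy under assumed shapes, not the paper's hierarchical model:

```python
import torch
import torch.nn as nn

class KSpaceCoordinateMLP(nn.Module):
    """Toy implicit representation: 2-D coordinate -> (real, imag) k-space value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, coords):               # coords: (n, 2) in [-1, 1]^2
        return self.net(coords)

model = KSpaceCoordinateMLP()
coords = torch.rand(256, 2) * 2 - 1          # observed sample locations
values = torch.randn(256, 2)                 # their measured (real, imag) values
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                         # fit to the partial observations
    opt.zero_grad()
    nn.functional.mse_loss(model(coords), values).backward()
    opt.step()
dense = model(torch.rand(4096, 2) * 2 - 1)   # query unobserved locations
```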
- SUMD: Super U-shaped Matrix Decomposition Convolutional neural network for Image denoising [0.0]
We introduce a matrix decomposition module (MD) into the network to establish global context features.
Inspired by the multi-stage progressive restoration design of U-shaped architectures, we further integrate the MD module into the multiple branches.
Our model (SUMD) produces visual quality and accuracy comparable to Transformer-based methods.
arXiv Detail & Related papers (2022-04-11T04:38:34Z)
- Active Phase-Encode Selection for Slice-Specific Fast MR Scanning Using a Transformer-Based Deep Reinforcement Learning Framework [34.540525533018666]
We propose a lightweight, transformer-based deep reinforcement learning framework for generating high-quality slice-specific trajectories.
The proposed method is roughly 150 times faster and achieves a significant improvement in reconstruction accuracy.
arXiv Detail & Related papers (2022-03-11T05:05:09Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the detailed spatial information captured by CNNs with the global context provided by Transformers for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper; a minimal per-layer bit-width illustration follows this entry.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
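The core contrast with uniform precision can be shown in a few lines: fake-quantize different parts of a model to different bit-widths. The bit assignments below are arbitrary placeholders for illustration; the paper's techniques for learning the local precisions are not reproduced here:

```python
import torch

def fake_quantize(w, bits):
    """Uniform symmetric fake quantization of a tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

model = torch.nn.Sequential(
    torch.nn.Linear(128, 128), torch.nn.ReLU(), torch.nn.Linear(128, 32)
)
# Mixed precision: a sensitivity-ranked plan keeps more bits where errors hurt.
bit_plan = {"0.weight": 8, "2.weight": 4}    # placeholder assignments
with torch.no_grad():
    for name, param in model.named_parameters():
        if name in bit_plan:
            param.copy_(fake_quantize(param, bit_plan[name]))
```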
- Mixed Precision of Quantization of Transformer Language Models for Speech Recognition [67.95996816744251]
State-of-the-art neural language models represented by Transformers are becoming increasingly complex and expensive for practical applications.
Current low-bit quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of the system to quantization errors.
The optimal local precision settings are automatically learned using two techniques.
Experiments were conducted on the Penn Treebank (PTB) corpus and a Switchboard-trained LF-MMI TDNN system.
arXiv Detail & Related papers (2021-11-29T09:57:00Z)
- Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer [86.6394254676369]
We propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction.
We formulate the under-sampled data and reference data as queries and keys in a transformer.
The proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance. A generic cross-attention sketch of the query/key formulation follows this entry.
arXiv Detail & Related papers (2021-11-18T03:06:25Z)
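The query/key formulation described above maps onto standard cross-attention: tokens from the under-sampled data act as queries, and tokens from the reference data supply keys and values. The snippet below shows only that generic pattern, not the paper's TTM:

```python
import torch
import torch.nn as nn

d = 64
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

under = torch.randn(2, 100, d)   # tokens from the under-sampled data (queries)
ref = torch.randn(2, 100, d)     # tokens from the reference data (keys/values)
textured, _ = attn(query=under, key=ref, value=ref)
```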
- A novel Time-frequency Transformer and its Application in Fault Diagnosis of Rolling Bearings [0.24214594180459362]
We propose a novel time-frequency Transformer (TFT) model inspired by the success of the standard Transformer in sequence processing.
A new end-to-end fault diagnosis framework based on TFT is presented in this paper; a speculative pipeline sketch follows this entry.
arXiv Detail & Related papers (2021-04-19T06:53:31Z)
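From this summary alone, a time-frequency Transformer for bearing signals plausibly tokenizes a spectrogram and classifies fault types. The sketch below is a guess at such a pipeline; every name and hyperparameter in it is hypothetical:

```python
import torch
import torch.nn as nn

class TFTSketch(nn.Module):
    """Hypothetical pipeline: STFT -> one token per frame -> Transformer
    encoder -> mean pool -> fault-class logits."""
    def __init__(self, n_fft=64, d_model=64, n_classes=4):
        super().__init__()
        self.n_fft = n_fft
        self.embed = nn.Linear(n_fft // 2 + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (batch, time) vibration signal
        window = torch.hann_window(self.n_fft, device=x.device)
        spec = torch.stft(x, self.n_fft, window=window,
                          return_complex=True).abs()
        tokens = self.embed(spec.transpose(1, 2))   # (batch, frames, d_model)
        pooled = self.encoder(tokens).mean(dim=1)   # pool over frames
        return self.head(pooled)

logits = TFTSketch()(torch.randn(2, 2048))   # two signals of 2048 samples
```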
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.