Self-Supervised Learning for MRI Reconstruction with a Parallel Network Training Framework
- URL: http://arxiv.org/abs/2109.12502v1
- Date: Sun, 26 Sep 2021 06:09:56 GMT
- Title: Self-Supervised Learning for MRI Reconstruction with a Parallel Network
Training Framework
- Authors: Chen Hu, Cheng Li, Haifeng Wang, Qiegen Liu, Hairong Zheng and
Shanshan Wang
- Abstract summary: The proposed method is flexible and can be employed in any existing deep learning-based method.
The effectiveness of the method is evaluated on an open brain MRI dataset.
- Score: 24.46388892324129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image reconstruction from undersampled k-space data plays an important role
in accelerating the acquisition of MR data, and many deep learning-based
methods have been explored recently. Despite the inspiring results achieved,
the optimization of these methods commonly relies on the fully-sampled
reference data, which are time-consuming and difficult to collect. To address
this issue, we propose a novel self-supervised learning method. Specifically,
during model optimization, two subsets are constructed by randomly selecting
part of the k-space data from the undersampled data and then fed into two parallel
reconstruction networks to perform information recovery. Two reconstruction
losses are defined on all the scanned data points to enhance the network's
capability of recovering the frequency information. Meanwhile, to constrain the
unscanned data points learned by the network, a difference loss is designed to
enforce consistency between the two parallel networks. In this way, the
reconstruction model can be properly trained with only the undersampled data.
During the model evaluation, the undersampled data are treated as the inputs
and either of the two trained networks is expected to reconstruct
high-quality results. The proposed method is flexible and can be employed in
any existing deep learning-based method. The effectiveness of the method is
evaluated on an open brain MRI dataset. Experimental results demonstrate that
the proposed self-supervised method can achieve competitive reconstruction
performance compared to the corresponding supervised learning method at high
acceleration rates (4 and 8). The code is publicly available at
\url{https://github.com/chenhu96/Self-Supervised-MRI-Reconstruction}.
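The training scheme in the abstract can be made concrete with a short sketch. The block below is a minimal illustration written from the abstract alone, not the authors' released code (see the repository linked above); the 50/50 random split, the function and network names, the call signature net(kspace, mask), and the choice to apply the difference loss only at unscanned locations are assumptions made for illustration.

```python
import torch

def split_scanned_kspace(kspace_us, mask, split_ratio=0.5):
    """Randomly partition the scanned k-space points into two subsets.

    kspace_us : undersampled k-space, complex tensor of shape (B, C, H, W)
    mask      : binary float sampling mask of shape (B, 1, H, W); 1 = scanned point
    The disjoint 50/50 split is an illustrative assumption, not the paper's
    exact selection rule.
    """
    sel = (torch.rand(mask.shape, device=mask.device) < split_ratio).to(mask.dtype)
    mask_a = mask * sel
    mask_b = mask * (1.0 - sel)
    return kspace_us * mask_a, mask_a, kspace_us * mask_b, mask_b


def self_supervised_step(net_a, net_b, kspace_us, mask, lambda_diff=1.0):
    """One training step: two parallel networks, two reconstruction losses on
    all scanned data points, and a difference loss tying the networks together."""
    ks_a, mask_a, ks_b, mask_b = split_scanned_kspace(kspace_us, mask)

    # Each network maps its k-space subset to an estimate of the full k-space
    # (an image-domain network would wrap this call with FFT/IFFT operators).
    pred_a = net_a(ks_a, mask_a)
    pred_b = net_b(ks_b, mask_b)

    # Reconstruction losses: agree with the measured data on *all* scanned points.
    n_scanned = mask.sum().clamp(min=1.0)
    rec_a = torch.abs(pred_a * mask - kspace_us).sum() / n_scanned
    rec_b = torch.abs(pred_b * mask - kspace_us).sum() / n_scanned

    # Difference loss: the two networks should agree on the unscanned points;
    # restricting it to unscanned locations is an assumption about the paper.
    n_unscanned = (1.0 - mask).sum().clamp(min=1.0)
    diff = torch.abs((pred_a - pred_b) * (1.0 - mask)).sum() / n_unscanned

    return rec_a + rec_b + lambda_diff * diff
```

At inference time, either trained network is applied to the full undersampled measurements, matching the evaluation protocol described in the abstract.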
Related papers
- JSSL: Joint Supervised and Self-supervised Learning for MRI Reconstruction [7.018974360061121]
Joint Supervised and Self-supervised Learning (JSSL) is a novel training approach for deep learning-based MRI reconstruction algorithms.
JSSL operates by simultaneously training a model in a self-supervised learning setting, using subsampled data from the target dataset.
We demonstrate JSSL's efficacy using subsampled prostate or cardiac MRI data as the target datasets.
arXiv Detail & Related papers (2023-11-27T14:23:36Z) - Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z) - Iterative self-transfer learning: A general methodology for response
time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks based on small datasets is proposed in this study.
The results show that the proposed method can improve the model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z) - SelfCoLearn: Self-supervised collaborative learning for accelerating
dynamic MR imaging [15.575332712603172]
This paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction from undersampled k-space data.
The proposed framework is equipped with three important components, namely, dual-network collaborative learning, reundersampling data augmentation and a specially designed co-training loss.
Results show that our method possesses strong capabilities in capturing essential and inherent representations for direct reconstructions from the undersampled k-space data.
arXiv Detail & Related papers (2022-08-08T04:01:26Z) - PUERT: Probabilistic Under-sampling and Explicable Reconstruction
Network for CS-MRI [47.24613772568027]
Compressed Sensing MRI aims at reconstructing de-aliased images from sub-Nyquist sampled k-space data to accelerate MR imaging.
We propose a novel end-to-end Probabilistic Under-sampling and Explicable Reconstruction neTwork, dubbed PUERT, to jointly optimize the sampling pattern and the reconstruction network.
Experiments on two widely used MRI datasets demonstrate that our proposed PUERT achieves state-of-the-art results in terms of both quantitative metrics and visual quality. A generic sketch of one way to learn such a probabilistic sampling mask is given after this list.
arXiv Detail & Related papers (2022-04-24T04:23:57Z) - PARCEL: Physics-based unsupervised contrastive representation learning
for parallel MR imaging [9.16860702327751]
This paper proposes a physics-based unsupervised contrastive representation learning (PARCEL) method to speed up parallel MR imaging.
Specifically, PARCEL has three key ingredients to achieve direct deep learning from the undersampled k-space data.
A specially designed co-training loss guides the two networks to capture the inherent features and representations of the MR image.
arXiv Detail & Related papers (2022-02-03T10:09:19Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with a smaller number of trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - LoRD-Net: Unfolded Deep Detection Network with Low-Resolution Receivers [104.01415343139901]
We propose a deep detector entitled LoRD-Net for recovering information symbols from one-bit measurements.
LoRD-Net has a task-based architecture dedicated to recovering the underlying signal of interest.
We evaluate the proposed receiver architecture for one-bit signal recovery in wireless communications.
arXiv Detail & Related papers (2021-02-05T04:26:05Z) - Multi-task MR Imaging with Iterative Teacher Forcing and Re-weighted
Deep Learning [14.62432715967572]
We develop a re-weighted multi-task deep learning method to learn prior knowledge from existing large datasets.
We then utilize this knowledge to assist simultaneous MR reconstruction and segmentation from the under-sampled k-space data.
Results show that the proposed method possesses encouraging capabilities for simultaneous and accurate MR reconstruction and segmentation.
arXiv Detail & Related papers (2020-11-27T09:08:05Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT
Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
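The PUERT entry above describes jointly optimizing the sampling pattern and the reconstruction network. For reference, the block below is a generic sketch of one common way to make a sampling pattern learnable end to end, via per-location probabilities and a relaxed Bernoulli (sigmoid) sampler; it illustrates the general idea only and is not PUERT's actual architecture. All names, shapes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class ProbabilisticMask(nn.Module):
    """Learnable per-location k-space sampling probabilities (generic sketch).

    The logits parameterization, the sigmoid relaxation and the slope value
    are illustrative assumptions; they are not taken from the PUERT paper.
    """

    def __init__(self, height, width, slope=10.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(height, width))
        self.slope = slope

    def forward(self, batch_size):
        probs = torch.sigmoid(self.logits)          # sampling probability per location
        noise = torch.rand(batch_size, *self.logits.shape, device=self.logits.device)
        # Relaxed Bernoulli draw: a differentiable "soft" mask during training,
        # which can be thresholded to a binary pattern at test time.
        soft_mask = torch.sigmoid(self.slope * (probs - noise))
        return soft_mask.unsqueeze(1)               # shape (B, 1, H, W)

# Joint optimization of mask and reconstruction network (hypothetical usage):
#   mask_module = ProbabilisticMask(H, W)
#   mask = mask_module(kspace_full.shape[0])
#   image_hat = recon_net(kspace_full * mask)
#   loss = criterion(image_hat, image_full)
```

In this kind of setup the soft mask is usually binarized at test time, and the joint training typically relies on fully sampled references, in contrast to the self-supervised scheme of the main paper above.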