Score-based Generative Priors Guided Model-driven Network for MRI Reconstruction
- URL: http://arxiv.org/abs/2405.02958v2
- Date: Mon, 15 Jul 2024 11:54:32 GMT
- Title: Score-based Generative Priors Guided Model-driven Network for MRI Reconstruction
- Authors: Xiaoyu Qiao, Weisheng Li, Bin Xiao, Yuping Huang, Lijian Yang,
- Abstract summary: We propose a novel workflow where naive SMLD samples serve as additional priors to guide model-driven network training.
First, we adopt a pretrained score network to generate samples as preliminary guidance images (PGIs).
Second, we design a denoising module (DM) to coarsely eliminate artifacts and noise from the PGIs.
Third, we design a model-driven network guided by the denoised PGIs to further recover fine details.
- Score: 14.53268880380804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The score matching with Langevin dynamics (SMLD) method has been successfully applied to accelerated MRI. However, the hyperparameters in the sampling process require subtle tuning; otherwise, the results can be severely corrupted by hallucination artifacts, especially with out-of-distribution test data. To address these limitations, we propose a novel workflow in which naive SMLD samples serve as additional priors to guide model-driven network training. First, we adopt a pretrained score network to generate samples as preliminary guidance images (PGIs), obviating the need for network retraining, parameter tuning, and in-distribution test data. Although PGIs are corrupted by hallucination artifacts, we believe they can provide extra information through effective denoising steps to facilitate reconstruction. Therefore, in the second step we design a denoising module (DM) to coarsely eliminate artifacts and noise from the PGIs. Features are extracted by a score-based information extractor (SIE) and a cross-domain information extractor (CIE) and mapped directly to the noise patterns. Third, we design a model-driven network guided by the denoised PGIs (DGIs) to further recover fine details. The DGIs are densely connected with the intermediate reconstructions in each cascade to enrich the information, and are periodically updated to provide more accurate guidance. Our experiments on different datasets reveal that, despite the low average quality of the PGIs, the proposed workflow effectively extracts valuable information to guide network training, even with severely reduced training data and sampling steps. Our method outperforms other cutting-edge techniques by effectively mitigating hallucination artifacts, yielding robust and high-quality reconstructions.
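To make Stage 1 concrete, below is a minimal PyTorch sketch of SMLD-style annealed Langevin sampling with a hard data-consistency step after each update, showing how a frozen, pretrained score network can produce a preliminary guidance image (PGI). `TinyScoreNet`, the step-size schedule, and all tensor shapes are illustrative assumptions for readability, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class TinyScoreNet(nn.Module):
    """Stand-in for the pretrained score network (illustrative assumption)."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x, sigma):
        # NCSN-style conditioning shortcut: scale the output by 1/sigma.
        return self.net(x) / sigma

def data_consistency(x, y, mask):
    """Overwrite sampled k-space locations of x with the measured data y."""
    k = torch.fft.fft2(x)
    return torch.fft.ifft2(mask * y + (1 - mask) * k)

@torch.no_grad()
def generate_pgi(score_net, y, mask, sigmas, n_steps=3, eps=2e-5):
    """Stage 1 (sketch): annealed Langevin dynamics (SMLD) with a frozen,
    pretrained score network to draw a PGI; no retraining, per-case
    parameter tuning, or in-distribution assumption on the test data."""
    x = torch.fft.ifft2(mask * y)                    # zero-filled initialization
    for sigma in sigmas:                             # coarse-to-fine noise levels
        alpha = eps * (sigma / sigmas[-1]) ** 2      # standard SMLD step size
        for _ in range(n_steps):
            z = torch.randn_like(x.real)
            x = x + alpha * score_net(x.real, sigma) + (2 * alpha) ** 0.5 * z
            x = data_consistency(x, y, mask)         # keep measured k-space
    return x.real                                    # PGI (may still hallucinate)

# Toy usage on a 64x64 problem with a random ~30% sampling mask.
mask = (torch.rand(1, 1, 64, 64) < 0.3).float()
y = mask * torch.fft.fft2(torch.randn(1, 1, 64, 64))
pgi = generate_pgi(TinyScoreNet(), y, mask, sigmas=[1.0, 0.5, 0.1])
```

Because the score network stays frozen and the schedule is deliberately coarse (few noise levels, few steps per level), the PGI is cheap to obtain but may carry exactly the hallucination artifacts that the later stages are designed to remove.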
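Stages 2 and 3 can be pictured the same way. In the sketch below, a denoising module with two branches standing in for the SIE and CIE predicts the noise pattern in the PGI and subtracts it residually; an unrolled network then concatenates the denoised guidance image (DGI) into every cascade and periodically refreshes it from the current reconstruction. The module layout, cascade count, residual formulation, and refresh rule are all assumptions made for illustration rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DenoisingModule(nn.Module):
    """Stage 2 (sketch): coarse artifact/noise removal from the PGI.
    The two branches loosely mirror the paper's SIE (image-domain
    features) and CIE (cross-domain features); their fused output is
    regressed to the noise pattern, which is subtracted residually."""
    def __init__(self, ch=32):
        super().__init__()
        self.sie = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.cie = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def forward(self, pgi):
        k = torch.fft.fft2(pgi)                             # k-space view
        cross = torch.cat([k.real, k.imag], dim=1)          # cross-domain input
        feats = torch.cat([self.sie(pgi), self.cie(cross)], dim=1)
        return pgi - self.head(feats)                       # DGI = PGI - predicted noise

class GuidedUnrolledNet(nn.Module):
    """Stage 3 (sketch): unrolled cascades where every cascade sees the
    current reconstruction concatenated with the DGI, followed by hard
    data consistency; the DGI is refreshed every `refresh` cascades."""
    def __init__(self, n_cascades=6, ch=32, refresh=2):
        super().__init__()
        self.refresh = refresh
        self.dm = DenoisingModule(ch)
        self.cascades = nn.ModuleList(
            nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(ch, 1, 3, padding=1))
            for _ in range(n_cascades))

    def forward(self, pgi, y, mask):
        x = torch.fft.ifft2(mask * y).real                  # zero-filled start
        dgi = self.dm(pgi)                                  # initial guidance
        for i, cascade in enumerate(self.cascades):
            x = x + cascade(torch.cat([x, dgi], dim=1))     # DGI-guided refinement
            k = torch.fft.fft2(x)
            x = torch.fft.ifft2(mask * y + (1 - mask) * k).real  # data consistency
            if (i + 1) % self.refresh == 0:                 # periodic DGI update
                dgi = self.dm(0.5 * (dgi + x))              # blend in current recon
        return x

# Toy end-to-end pass (shapes only): a Stage-1 PGI feeds Stages 2-3.
mask = (torch.rand(1, 1, 64, 64) < 0.3).float()
y = mask * torch.fft.fft2(torch.randn(1, 1, 64, 64))
recon = GuidedUnrolledNet()(torch.randn(1, 1, 64, 64), y, mask)
```

The refresh step is this sketch's stand-in for the paper's periodic DGI update; the exact update rule and the dense connections between DGIs and intermediate reconstructions would follow the architecture described in the paper.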
Related papers
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing a perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Inference Stage Denoising for Undersampled MRI Reconstruction [13.8086726938161]
Reconstruction of magnetic resonance imaging (MRI) data has been positively affected by deep learning.
A key challenge remains: to improve generalisation to distribution shifts between the training and testing data.
arXiv Detail & Related papers (2024-02-12T12:50:10Z)
- Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Improving the Robustness of Summarization Models by Detecting and Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- A theoretical framework for self-supervised MR image reconstruction using sub-sampling via variable density Noisier2Noise [0.0]
We use the Noisier2Noise framework to analytically explain the performance of Self-Supervised Learning via Data Undersampling (SSDU).
We propose partitioning the sampling set so that the subsets have the same type of distribution as the original sampling mask.
arXiv Detail & Related papers (2022-05-20T16:19:23Z)
- Hard Sample Aware Noise Robust Learning for Histopathology Image Classification [4.75542005200538]
We introduce a novel hard sample aware noise robust learning method for histopathology image classification.
To distinguish the informative hard samples from the harmful noisy ones, we build an easy/hard/noisy (EHN) detection model.
We propose a noise suppressing and hard enhancing (NSHE) scheme to train the noise robust model.
arXiv Detail & Related papers (2021-12-05T11:07:55Z)