Inference Stage Denoising for Undersampled MRI Reconstruction
- URL: http://arxiv.org/abs/2402.08692v1
- Date: Mon, 12 Feb 2024 12:50:10 GMT
- Title: Inference Stage Denoising for Undersampled MRI Reconstruction
- Authors: Yuyang Xue, Chen Qin, Sotirios A. Tsaftaris
- Abstract summary: Reconstruction of magnetic resonance imaging (MRI) data has been positively affected by deep learning.
A key challenge remains: to improve generalisation to distribution shifts between the training and testing data.
- Score: 13.8086726938161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstruction of magnetic resonance imaging (MRI) data has been positively
affected by deep learning. A key challenge remains: to improve generalisation
to distribution shifts between the training and testing data. Most approaches
aim to address this via inductive design or data augmentation. However, they
can be affected by misleading data, e.g. random noise, and cases where the
inference stage data do not match assumptions in the modelled shifts. In this
work, by employing a conditional hyperparameter network, we eliminate the need
for augmentation, yet maintain robust performance under various levels of
Gaussian noise. We demonstrate that our model withstands various input noise
levels while producing high-definition reconstructions during the test stage.
Moreover, we present a hyperparameter sampling strategy that accelerates the
convergence of training. Our proposed method achieves the highest accuracy and
image quality in all settings compared to baseline methods.
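The abstract's conditional hyperparameter network can be pictured as a small network that maps a sampled noise level to modulation parameters for the reconstruction network. The sketch below is a minimal illustration of that idea, assuming FiLM-style per-channel scale/shift conditioning; the random MLP weights, layer sizes, and function names are stand-ins, not the paper's architecture (in the paper the hyperparameter network is trained jointly with the reconstruction model, and noise levels are drawn by its hyperparameter sampling strategy).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hyperparameter network": maps a scalar noise level sigma to
# per-channel scale/shift parameters for a feature map (FiLM-style
# conditioning). Weights are random stand-ins; in the paper they are
# trained jointly with the reconstruction network.
C = 8                                              # feature channels
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)    # 1 -> 16 hidden units
W2, b2 = rng.normal(size=(2 * C, 16)), np.zeros(2 * C)

def hyper_net(sigma):
    h = np.maximum(W1 @ np.array([sigma]) + b1, 0.0)   # ReLU hidden layer
    out = W2 @ h + b2
    return out[:C], out[C:]                            # (scale, shift)

def condition(features, sigma):
    """Modulate a (C, H, W) feature map with noise-level-dependent params."""
    scale, shift = hyper_net(sigma)
    return features * scale[:, None, None] + shift[:, None, None]

# At training time, sigma would be sampled per batch so the network learns
# to adapt; at inference, the measured/estimated noise level is plugged in.
feats = rng.normal(size=(C, 4, 4))
out = condition(feats, sigma=0.05)
```

The key property is that the same reconstruction weights serve all noise levels, with only the cheap conditioning path varying, which is what removes the need for noise augmentation.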
Related papers
- Self-Supervised Pre-training Tasks for an fMRI Time-series Transformer in Autism Detection [3.665816629105171]
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that encompasses a wide variety of symptoms and degrees of impairment.
We have developed a transformer-based self-supervised framework that directly analyzes time-series fMRI data without computing functional connectivity.
We show that randomly masking entire ROIs gives better model performance than randomly masking time points in the pre-training step.
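The ROI-masking pre-training task described above can be sketched in a few lines: given an ROI-by-time matrix, whole rows (ROIs) are zeroed rather than scattered time points. This is a generic illustration under assumed conventions (rows = ROIs, zero as the mask value); the function name and mask fraction are hypothetical.

```python
import numpy as np

def mask_rois(ts, mask_frac=0.2, rng=None):
    """Zero out entire ROIs (rows) of an (n_roi, n_time) matrix, as in the
    ROI-masking pre-training task, instead of masking single time points."""
    rng = rng or np.random.default_rng(0)
    ts = ts.copy()
    n_roi = ts.shape[0]
    n_mask = max(1, int(mask_frac * n_roi))
    idx = rng.choice(n_roi, size=n_mask, replace=False)
    ts[idx, :] = 0.0          # mask the whole ROI time-series
    return ts, idx

# toy example: 10 ROIs, 50 time points
ts = np.ones((10, 50))
masked, idx = mask_rois(ts)
```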
arXiv Detail & Related papers (2024-09-18T20:29:23Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies affine transformations to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Noise Level Adaptive Diffusion Model for Robust Reconstruction of Accelerated MRI [34.361078452552945]
Real-world MRI acquisitions already contain inherent noise due to thermal fluctuations.
We propose a posterior sampling strategy with a novel NoIse Level Adaptive Data Consistency (Nila-DC) operation.
Our method surpasses the state-of-the-art MRI reconstruction methods, and is highly robust against various noise levels.
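The core of a noise-level-adaptive data consistency step can be sketched as follows. This is an illustrative blend of current estimate and measurements in k-space whose weight shrinks as the noise level grows, so noisy measurements are trusted less; the function name and the weighting rule `lam0 / (1 + sigma**2)` are assumptions for illustration, not the paper's exact Nila-DC operator.

```python
import numpy as np

def noise_adaptive_dc(x, y, mask, sigma, lam0=1.0):
    """At sampled k-space locations (mask True), blend the current image
    estimate's k-space with the measurements y; the blend weight lam
    decreases with the noise level sigma (illustrative rule, not Nila-DC)."""
    lam = lam0 / (1.0 + sigma ** 2)            # trust data less when noisy
    k = np.fft.fft2(x)
    k_dc = np.where(mask, (k + lam * y) / (1.0 + lam), k)
    return np.real(np.fft.ifft2(k_dc))

# toy usage: fully sampled mask, measurements of an all-zero target image
out = noise_adaptive_dc(np.ones((8, 8)), np.fft.fft2(np.zeros((8, 8))),
                        np.ones((8, 8), dtype=bool), sigma=0.5)
```

In a diffusion-based reconstruction loop, a step like this would be interleaved with the denoising updates, with sigma tracked across diffusion timesteps.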
arXiv Detail & Related papers (2024-03-08T12:07:18Z)
- DASA: Difficulty-Aware Semantic Augmentation for Speaker Verification [55.306583814017046]
We present a novel difficulty-aware semantic augmentation (DASA) approach for speaker verification.
DASA generates diversified training samples in speaker embedding space with negligible extra computing cost.
The best result achieves a 14.6% relative reduction in the EER metric on the CN-Celeb evaluation set.
arXiv Detail & Related papers (2023-10-18T17:07:05Z)
- SMRD: SURE-based Robust MRI Reconstruction with Diffusion Models [76.43625653814911]
Diffusion models have gained popularity for accelerated MRI reconstruction due to their high sample quality.
They can effectively serve as rich data priors while incorporating the forward model flexibly at inference time.
We introduce SURE-based MRI Reconstruction with Diffusion models (SMRD) to enhance robustness during testing.
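SURE (Stein's Unbiased Risk Estimate) lets one estimate a denoiser's mean-squared error without access to the ground truth, which is what makes it useful for tuning robustness at test time. Below is a generic Monte-Carlo SURE sketch with a single random probe for the divergence term; it illustrates the SURE idea SMRD builds on, not the SMRD algorithm itself, and the toy linear denoiser is purely illustrative.

```python
import numpy as np

def mc_sure(y, denoise, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE estimate of the MSE of denoise(y), where
    y = x + n with n ~ N(0, sigma^2 I). The divergence term is
    approximated with one random probe b via a finite difference."""
    rng = rng or np.random.default_rng(0)
    n = y.size
    b = rng.standard_normal(y.shape)
    div = np.sum(b * (denoise(y + eps * b) - denoise(y))) / eps
    return np.sum((denoise(y) - y) ** 2) / n - sigma ** 2 + 2 * sigma ** 2 * div / n

# toy denoiser: shrink toward zero (any image-to-image map works here)
f = lambda z: 0.9 * z
y = np.random.default_rng(1).standard_normal(10000)
risk = mc_sure(y, f, sigma=0.1)
```

Because the estimate needs only the noisy input and the denoiser as a black box, it can be monitored during inference to detect and correct distribution shift, which is the role it plays in test-time robustness methods.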
arXiv Detail & Related papers (2023-10-03T05:05:35Z)
- DiffSED: Sound Event Detection with Denoising Diffusion [70.18051526555512]
We reformulate the SED problem by taking a generative learning perspective.
Specifically, we aim to generate sound temporal boundaries from noisy proposals in a denoising diffusion process.
During training, our model learns to reverse the noising process by converting noisy latent queries to the groundtruth versions.
arXiv Detail & Related papers (2023-08-14T17:29:41Z)
- Realistic Noise Synthesis with Diffusion Models [68.48859665320828]
Deep image denoising models often rely on large amounts of training data for high-quality performance.
We propose a novel method that synthesizes realistic noise using diffusion models, namely the Realistic Noise Synthesize Diffusor (RNSD).
RNSD can incorporate guided multiscale content, so that more realistic noise with spatial correlations can be generated at multiple frequencies.
arXiv Detail & Related papers (2023-05-23T12:56:01Z)
- Progressive Subsampling for Oversampled Data -- Application to Quantitative MRI [3.9783356854895024]
We present PROSUB, a deep learning-based, automated methodology that subsamples an oversampled data set.
We build upon a recent dual-network approach that won the MICCAI MUlti-DIffusion (MUDI) quantitative MRI measurement sampling-reconstruction challenge.
We show that PROSUB outperforms the winner of the MUDI challenge sub-tasks and yields qualitative improvements on downstream processes useful for clinical applications.
arXiv Detail & Related papers (2022-03-17T11:44:07Z)
- Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers [0.0]
We introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER).
A zero-shot reconstruction is performed on undersampled test data, where inference is performed by optimizing network parameters.
Experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against several state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-05-15T02:01:21Z)
- Learning Energy-Based Models by Diffusion Recovery Likelihood [61.069760183331745]
We present a diffusion recovery likelihood method to tractably learn and sample from a sequence of energy-based models.
After training, synthesized images can be generated by a sampling process that initializes from a Gaussian white noise distribution.
On unconditional CIFAR-10 our method achieves FID 9.58 and inception score 8.30, superior to the majority of GANs.
arXiv Detail & Related papers (2020-12-15T07:09:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.