Score-based Self-supervised MRI Denoising
- URL: http://arxiv.org/abs/2505.05631v1
- Date: Thu, 08 May 2025 20:27:13 GMT
- Title: Score-based Self-supervised MRI Denoising
- Authors: Jiachen Tu, Yaokun Shi, Fan Lam
- Abstract summary: Supervised learning based denoising approaches have achieved impressive performance but require high signal-to-noise ratio (SNR) labels. Self-supervised learning holds promise to address the label scarcity issue, but existing self-supervised denoising methods tend to oversmooth fine spatial features. We introduce Corruption2Self (C2S), a novel score-based self-supervised framework for MRI denoising.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic resonance imaging (MRI) is a powerful noninvasive diagnostic imaging tool that provides unparalleled soft tissue contrast and anatomical detail. Noise contamination, especially in accelerated and/or low-field acquisitions, can significantly degrade image quality and diagnostic accuracy. Supervised learning-based denoising approaches have achieved impressive performance but require high signal-to-noise ratio (SNR) labels, which are often unavailable. Self-supervised learning holds promise to address the label scarcity issue, but existing self-supervised denoising methods tend to oversmooth fine spatial features and often yield inferior performance compared to supervised methods. We introduce Corruption2Self (C2S), a novel score-based self-supervised framework for MRI denoising. At the core of C2S is a generalized denoising score matching (GDSM) loss, which extends denoising score matching to work directly with noisy observations by modeling the conditional expectation of higher-SNR images given further corrupted observations. This allows the model to effectively learn denoising across multiple noise levels directly from noisy data. Additionally, we incorporate a reparameterization of noise levels to stabilize training and enhance convergence, and introduce a detail refinement extension to balance noise reduction with the preservation of fine spatial features. Moreover, C2S can be extended to multi-contrast denoising by leveraging complementary information across different MRI contrasts. We demonstrate that our method achieves state-of-the-art performance among self-supervised methods and competitive results compared to supervised counterparts across varying noise conditions and MRI contrasts on the M4Raw and fastMRI datasets.
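The core GDSM idea described in the abstract, training on a further-corrupted copy of the noisy data and regressing back to the higher-SNR observation, can be illustrated with a minimal, hypothetical NumPy sketch. The function names, the toy identity "model", and the noise level below are all illustrative placeholders; the paper's actual loss, noise-level reparameterization, and network are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def further_corrupt(y, extra_sigma, rng):
    # Add extra Gaussian noise to an already-noisy observation y.
    return y + extra_sigma * rng.normal(size=y.shape)

def gdsm_loss(model, y, extra_sigma, rng):
    """Conceptual generalized denoising score matching step:
    the model sees a further-corrupted version of y and is trained
    to predict the (higher-SNR) original observation y, so it learns
    to denoise directly from noisy data."""
    y_tilde = further_corrupt(y, extra_sigma, rng)
    pred = model(y_tilde, extra_sigma)
    return float(np.mean((pred - y) ** 2))

# Toy "model": an identity map standing in for a trainable network.
identity_model = lambda x, sigma: x

y = rng.normal(size=(8, 8))  # stand-in for a noisy MRI slice
loss = gdsm_loss(identity_model, y, extra_sigma=0.1, rng=rng)
```

With the identity model, the loss reduces to the power of the added corruption, which is the baseline a trained network must beat by partially undoing the extra noise.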
Related papers
- Sparse Mixture-of-Experts for Non-Uniform Noise Reduction in MRI Images
We introduce a novel approach leveraging a sparse mixture-of-experts framework for MRI image denoising. Each expert is a specialized denoising convolutional neural network fine-tuned to target specific noise characteristics associated with different image regions. Our method demonstrates superior performance over state-of-the-art denoising techniques on both synthetic and real-world MRI datasets.
arXiv Detail & Related papers (2025-01-24T03:04:44Z)
- Robust multi-coil MRI reconstruction via self-supervised denoising
We study the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios.
arXiv Detail & Related papers (2024-11-19T23:17:09Z)
- DiffCMR: Fast Cardiac MRI Reconstruction with Diffusion Probabilistic Models
DiffCMR perceives conditioning signals from the under-sampled MRI image slice and generates its corresponding fully-sampled MRI image slice.
We validate DiffCMR with cine reconstruction and T1/T2 mapping tasks on the MICCAI 2023 Cardiac MRI Reconstruction Challenge dataset.
Results show that our method achieves state-of-the-art performance, exceeding previous methods by a significant margin.
arXiv Detail & Related papers (2023-12-08T06:11:21Z)
- Denoising Simulated Low-Field MRI (70mT) using Denoising Autoencoders (DAE) and Cycle-Consistent Generative Adversarial Networks (Cycle-GAN)
A Cycle-Consistent Generative Adversarial Network (Cycle-GAN) is implemented to yield high-field, high-resolution, high signal-to-noise ratio (SNR) Magnetic Resonance Imaging (MRI) images.
Images were utilized to train a Denoising Autoencoder (DAE) and a Cycle-GAN, with paired and unpaired cases.
This work demonstrates the use of a generative deep learning model that can outperform classical DAEs to improve low-field MRI images and does not require image pairs.
arXiv Detail & Related papers (2023-07-12T00:01:00Z)
- Realistic Noise Synthesis with Diffusion Models
Deep denoising models require extensive real-world training data, which is challenging to acquire. We propose a novel Realistic Noise Synthesis Diffusor (RNSD) method using diffusion models to address these challenges.
arXiv Detail & Related papers (2023-05-23T12:56:01Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- DDM$^2$: Self-Supervised Diffusion MRI Denoising with Generative Diffusion Models
We propose a self-supervised method for MRI denoising using generative diffusion models.
Our framework integrates statistic-based denoising theory into diffusion models and performs denoising through conditional generation.
arXiv Detail & Related papers (2023-02-06T18:56:39Z)
- The role of noise in denoising models for anomaly detection in medical images
Pathological brain lesions exhibit diverse appearance in brain images.
Unsupervised anomaly detection approaches have been proposed using only normal data for training.
We show that optimization of the spatial resolution and magnitude of the noise improves the performance of different model training regimes.
arXiv Detail & Related papers (2023-01-19T21:39:38Z)
- Noise2Contrast: Multi-Contrast Fusion Enables Self-Supervised Tomographic Image Denoising
Noise2Contrast combines information from multiple measured image contrasts to train a denoising model.
We stack denoising with domain-transfer operators to utilize the independent noise realizations of different image contrasts to derive a self-supervised loss.
Our experiments on different real measured data sets indicate that Noise2Contrast generalizes to other multi-contrast imaging modalities.
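The Noise2Contrast training signal described above can be sketched with toy NumPy stand-ins. The simulated T1/T2 contrasts, the linear domain-transfer operator, and the mean-filter "denoiser" below are all hypothetical placeholders for the paper's learned components; only the loss structure, comparing a domain-transferred denoised contrast against a second contrast with independent noise, follows the description.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical co-registered contrasts of the same anatomy,
# each carrying an independent noise realization.
anatomy = rng.random((16, 16))
t1 = 1.0 * anatomy + 0.1 * rng.normal(size=anatomy.shape)
t2 = 0.5 * anatomy + 0.1 * rng.normal(size=anatomy.shape)

def denoiser(x):
    # Stand-in for a trainable denoising network: a 3x3 mean filter.
    h, w = x.shape
    pad = np.pad(x, 1, mode="edge")
    return sum(pad[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0

def domain_transfer(x, scale=0.5):
    # Stand-in linear operator mapping the T1 contrast into the T2 domain.
    return scale * x

# Self-supervised loss: denoise T1, map it into the T2 domain, and compare
# against noisy T2, whose noise is independent of the noise in T1.
loss = float(np.mean((domain_transfer(denoiser(t1)) - t2) ** 2))
```

Because the two noise realizations are independent, the network cannot lower this loss by reproducing the input noise, which is what makes the objective self-supervised.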
arXiv Detail & Related papers (2022-12-09T13:03:24Z)
- Zero-shot Blind Image Denoising via Implicit Neural Representations
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z)
- Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising
We introduce Noise2Same, a novel self-supervised denoising framework.
In particular, Noise2Same requires neither J-invariance nor extra information about the noise model.
Our results show that Noise2Same substantially outperforms previous self-supervised denoising methods.
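A simplified, hypothetical NumPy sketch of the Noise2Same objective follows: a reconstruction term plus a self-invariance term that compares the network's output on the original and a randomly masked input, evaluated only at the masked pixels. No J-invariant architecture is required because the masking enters only through the loss. The mean-filter "network", masking ratio, and weight are illustrative, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.random((16, 16))  # stand-in noisy image

def f(img):
    # Stand-in for the trainable denoising network: a 3x3 mean filter.
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    return sum(pad[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0

# Randomly mask ~10% of pixels by replacing them with random values.
mask = rng.random(x.shape) < 0.1
x_masked = np.where(mask, rng.random(x.shape), x)

recon = np.mean((f(x) - x) ** 2)                      # reconstruction term
inv = np.mean((f(x)[mask] - f(x_masked)[mask]) ** 2)  # self-invariance term
loss = float(recon + 2.0 * np.sqrt(inv))              # heuristic weight
```

The invariance term penalizes a network that copies input pixels to the output, which is how Noise2Same avoids the identity solution without restricting the architecture.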
arXiv Detail & Related papers (2020-10-22T18:12:26Z)
- Improving Blind Spot Denoising for Microscopy
We present a novel way to improve the quality of self-supervised denoising.
We assume the clean image to be the result of a convolution with a point spread function (PSF) and explicitly include this operation at the end of our neural network.
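The PSF idea described above, letting the network predict a sharp underlying image and appending a fixed convolution with the known PSF before the loss is computed, can be sketched in NumPy. The box PSF, the naive spatial convolution, and the random "network output" are illustrative assumptions; a real pipeline would use the measured PSF and an FFT-based convolution.

```python
import numpy as np

def convolve2d_same(img, psf):
    # Naive 'same'-size 2D convolution (illustrative; not performance code).
    H, W = img.shape
    kh, kw = psf.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i+kh, j:j+kw] * psf[::-1, ::-1])
    return out

rng = np.random.default_rng(1)
clean_estimate = rng.random((16, 16))  # hypothetical network output

# Simple normalized box PSF standing in for the microscope's PSF.
psf = np.ones((3, 3)) / 9.0

# The PSF convolution acts as a fixed final layer, so the self-supervised
# loss compares the re-blurred estimate against the noisy measurement.
reconvolved = convolve2d_same(clean_estimate, psf)
noisy = clean_estimate + 0.05 * rng.normal(size=clean_estimate.shape)
loss = float(np.mean((reconvolved - noisy) ** 2))
```

Because the fixed convolution absorbs the blur, the network itself is free to output an estimate sharper than the measurement, which is the stated quality gain.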
arXiv Detail & Related papers (2020-08-19T13:06:24Z)
This list is automatically generated from the titles and abstracts of the papers indexed by this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.