Patch2Self: Denoising Diffusion MRI with Self-Supervised Learning
- URL: http://arxiv.org/abs/2011.01355v1
- Date: Mon, 2 Nov 2020 22:27:25 GMT
- Title: Patch2Self: Denoising Diffusion MRI with Self-Supervised Learning
- Authors: Shreyas Fadnavis, Joshua Batson, Eleftherios Garyfallidis
- Abstract summary: We introduce a self-supervised learning method for denoising DWI data, Patch2Self, which uses the entire volume to learn a full-rank locally linear denoiser for that volume.
We demonstrate the effectiveness of Patch2Self via quantitative and qualitative improvements in microstructure modeling, tracking (via fiber bundle coherency) and model estimation relative to other unsupervised methods on real and simulated data.
- Score: 7.090165638014331
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion-weighted magnetic resonance imaging (DWI) is the only noninvasive
method for quantifying microstructure and reconstructing white-matter pathways
in the living human brain. Fluctuations from multiple sources create
significant additive noise in DWI data which must be suppressed before
subsequent microstructure analysis. We introduce a self-supervised learning
method for denoising DWI data, Patch2Self, which uses the entire volume to
learn a full-rank locally linear denoiser for that volume. By taking advantage
of the oversampled q-space of DWI data, Patch2Self can separate structure from
noise without requiring an explicit model for either. We demonstrate the
effectiveness of Patch2Self via quantitative and qualitative improvements in
microstructure modeling, tracking (via fiber bundle coherency) and model
estimation relative to other unsupervised methods on real and simulated data.
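To make the regression setup concrete, below is a minimal sketch of the Patch2Self idea in Python, assuming a 4D DWI array of shape (X, Y, Z, N): each volume is predicted from patches drawn only from the other N-1 volumes, so noise that is independent across q-space samples cannot be copied into the prediction. The plain least-squares regressor, the patch radius, and the helper names (extract_patches, patch2self_sketch) are illustrative assumptions, not the authors' reference code.

```python
# Minimal sketch of the Patch2Self idea, assuming a 4D DWI array `dwi` of
# shape (X, Y, Z, N) with N diffusion-weighted volumes. The patch radius,
# the plain least-squares regressor, and the helper names are illustrative
# choices, not the authors' reference implementation.
import numpy as np
from sklearn.linear_model import LinearRegression


def extract_patches(dwi, radius=1):
    """Flatten the (2r+1)^3 neighbourhood of every interior voxel, per volume.

    Returns an array of shape (n_voxels, patch_size, N), where n_voxels counts
    voxels whose full patch fits inside the volume and patch_size = (2r+1)^3.
    """
    X, Y, Z, N = dwi.shape
    r = radius
    rows = []
    for x in range(r, X - r):
        for y in range(r, Y - r):
            for z in range(r, Z - r):
                patch = dwi[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1, :]
                rows.append(patch.reshape(-1, N))          # (patch_size, N)
    return np.asarray(rows)


def patch2self_sketch(dwi, radius=1):
    """Denoise each volume with a linear map learned from all *other* volumes."""
    X, Y, Z, N = dwi.shape
    r = radius
    patch_size = (2 * r + 1) ** 3
    feats = extract_patches(dwi, radius=r)                 # (n_voxels, patch_size, N)
    denoised = dwi.astype(float)
    interior = denoised[r:X - r, r:Y - r, r:Z - r, :]      # view into `denoised`
    for j in range(N):
        # J-invariance: volume j never appears in its own input features,
        # so independent noise in volume j cannot be predicted, only signal.
        train_x = np.delete(feats, j, axis=2).reshape(len(feats), -1)
        train_y = feats[:, patch_size // 2, j]             # centre voxel of volume j
        model = LinearRegression().fit(train_x, train_y)
        interior[..., j] = model.predict(train_x).reshape(interior.shape[:3])
    return denoised


# Toy usage on random data (a real DWI series would come from e.g. nibabel):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = rng.standard_normal((12, 12, 12, 10))
    print(patch2self_sketch(toy, radius=1).shape)          # (12, 12, 12, 10)
```

A maintained Patch2Self implementation is distributed with DIPY and is the practical route for real acquisitions; the sketch above only conveys the self-supervised, J-invariant regression step and omits the subsampling and memory handling a full implementation needs.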
Related papers
- Restoration Score Distillation: From Corrupted Diffusion Pretraining to One-Step High-Quality Generation [82.39763984380625]
We propose Restoration Score Distillation (RSD), a principled generalization of Denoising Score Distillation (DSD). RSD accommodates a broader range of corruption types, such as blurred, incomplete, or low-resolution images. It consistently surpasses its teacher model across diverse restoration tasks on both natural and scientific datasets.
arXiv Detail & Related papers (2025-05-19T17:21:03Z) - Noisier2Inverse: Self-Supervised Learning for Image Reconstruction with Correlated Noise [1.099532646524593]
Noisier2Inverse is a correction-free self-supervised deep learning approach for general inverse problems.
We numerically demonstrate that our method clearly outperforms previous self-supervised approaches that account for correlated noise.
arXiv Detail & Related papers (2025-03-25T08:59:11Z) - Denoising Score Distillation: From Noisy Diffusion Pretraining to One-Step High-Quality Generation [82.39763984380625]
We introduce denoising score distillation (DSD), a surprisingly effective and novel approach for training high-quality generative models from low-quality data.
DSD pretrains a diffusion model exclusively on noisy, corrupted samples and then distills it into a one-step generator capable of producing refined, clean outputs.
arXiv Detail & Related papers (2025-03-10T17:44:46Z) - Self-Supervised Diffusion MRI Denoising via Iterative and Stable Refinement [20.763457281944834]
Di-Fusion is a fully self-supervised denoising method that leverages the latter diffusion steps and an adaptive sampling process.
Our experiments on real and simulated data demonstrate that Di-Fusion achieves state-of-the-art performance in microstructure modeling, tractography tracking, and other downstream tasks.
arXiv Detail & Related papers (2025-01-23T10:01:33Z) - Enhancing Deep Learning-Driven Multi-Coil MRI Reconstruction via Self-Supervised Denoising [4.6017417632210655]
Self-supervised denoising is a pre-processing step for training deep learning (DL) based reconstruction methods.
We show that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods.
arXiv Detail & Related papers (2024-11-19T23:17:09Z) - Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Inference Stage Denoising for Undersampled MRI Reconstruction [13.8086726938161]
Reconstruction of magnetic resonance imaging (MRI) data has been positively affected by deep learning.
A key challenge remains: to improve generalisation to distribution shifts between the training and testing data.
arXiv Detail & Related papers (2024-02-12T12:50:10Z) - Exploring Diffusion Time-steps for Unsupervised Representation Learning [72.43246871893936]
We build a theoretical framework that connects the diffusion time-steps and the hidden attributes.
On CelebA, FFHQ, and Bedroom datasets, the learned feature significantly improves classification.
arXiv Detail & Related papers (2024-01-21T08:35:25Z) - Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z) - Data Augmentation for Seizure Prediction with Generative Diffusion Model [26.967247641926814]
Seizure prediction is of great importance to improve the life of patients.
The severe imbalance problem between preictal and interictal data still poses a great challenge.
Data augmentation is an intuitive way to solve this problem.
We propose a novel data augmentation method with diffusion model called DiffEEG.
arXiv Detail & Related papers (2023-06-14T05:44:53Z) - The role of noise in denoising models for anomaly detection in medical images [62.0532151156057]
Pathological brain lesions exhibit diverse appearance in brain images.
Unsupervised anomaly detection approaches have been proposed using only normal data for training.
We show that optimization of the spatial resolution and magnitude of the noise improves the performance of different model training regimes.
arXiv Detail & Related papers (2023-01-19T21:39:38Z) - Improving the Robustness of Summarization Models by Detecting and
Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z) - Denoising diffusion models for out-of-distribution detection [2.113925122479677]
We exploit the view of denoising probabilistic diffusion models (DDPM) as denoising autoencoders.
We use DDPMs to reconstruct an input that has been noised to a range of noise levels, and use the resulting multi-dimensional reconstruction error to classify out-of-distribution inputs (a toy sketch of this scoring pipeline follows the related-papers list below).
arXiv Detail & Related papers (2022-11-14T20:35:11Z) - A theoretical framework for self-supervised MR image reconstruction using sub-sampling via variable density Noisier2Noise [0.0]
We use the Noisier2Noise framework to analytically explain the performance of Self-Supervised Learning via Data Undersampling (SSDU).
We propose partitioning the sampling set so that the subsets have the same type of distribution as the original sampling mask.
arXiv Detail & Related papers (2022-05-20T16:19:23Z) - Adaptive Multi-View ICA: Estimation of noise levels for optimal inference [65.94843987207445]
Adaptive Multi-View ICA (AVICA) is a noisy ICA model where each view is a linear mixture of shared independent sources with additive noise on the sources.
On synthetic data, AVICA yields better source estimates than other group ICA methods thanks to its explicit MMSE estimator.
On real magnetoencephalography (MEG) data, we provide evidence that the decomposition is less sensitive to sampling noise and that the noise variance estimates are biologically plausible.
arXiv Detail & Related papers (2021-02-22T13:10:12Z)
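For the multi-noise-level reconstruction-error scoring described in the "Denoising diffusion models for out-of-distribution detection" entry above, here is a hedged toy sketch. A trained DDPM is required in practice; the `reconstruct` stand-in below only noises the input and applies Gaussian smoothing so the example runs end-to-end, and the noise levels and scoring rule are illustrative assumptions rather than that paper's exact protocol.

```python
# Toy sketch of multi-noise-level reconstruction-error OOD scoring.
# `reconstruct` is a PLACEHOLDER for a trained DDPM's noise-then-denoise
# round trip; here it is Gaussian smoothing so the script is self-contained.
import numpy as np
from scipy.ndimage import gaussian_filter


def reconstruct(x, noise_level, rng):
    """Stand-in for DDPM reconstruction of an input noised to `noise_level`."""
    noised = x + noise_level * rng.standard_normal(x.shape)
    return gaussian_filter(noised, sigma=1.0 + 2.0 * noise_level)


def ood_error_profile(x, noise_levels=(0.1, 0.3, 0.5, 0.8), seed=0):
    """Return one reconstruction error per noise level (the OOD feature vector)."""
    rng = np.random.default_rng(seed)
    return np.array([np.mean((x - reconstruct(x, t, rng)) ** 2)
                     for t in noise_levels])


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    in_dist = gaussian_filter(rng.standard_normal((64, 64)), sigma=2.0)  # smooth image
    out_dist = rng.standard_normal((64, 64))                             # pure noise
    print(ood_error_profile(in_dist))
    print(ood_error_profile(out_dist))
    # A simple classifier (or a threshold on the mean error) would then separate
    # in-distribution from out-of-distribution inputs using these profiles.
```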
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.