Solving Inverse Problems with Score-Based Generative Priors learned from Noisy Data
- URL: http://arxiv.org/abs/2305.01166v1
- Date: Tue, 2 May 2023 02:51:01 GMT
- Title: Solving Inverse Problems with Score-Based Generative Priors learned from Noisy Data
- Authors: Asad Aali, Marius Arvinte, Sidharth Kumar, Jonathan I. Tamir
- Abstract summary: SURE-Score is an approach for learning score-based generative models using training samples corrupted by additive Gaussian noise.
We demonstrate the generality of SURE-Score by learning priors and applying posterior sampling to ill-posed inverse problems in two practical applications.
- Score: 1.7969777786551424
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present SURE-Score: an approach for learning score-based generative models
using training samples corrupted by additive Gaussian noise. When a large
training set of clean samples is available, solving inverse problems via
score-based (diffusion) generative models trained on the underlying
fully-sampled data distribution has recently been shown to outperform
end-to-end supervised deep learning. In practice, such a large collection of
training data may be prohibitively expensive to acquire in the first place. In
this work, we present an approach for approximately learning a score-based
generative model of the clean distribution, from noisy training data. We
formulate and justify a novel loss function that leverages Stein's unbiased
risk estimate to jointly denoise the data and learn the score function via
denoising score matching, while using only the noisy samples. We demonstrate
the generality of SURE-Score by learning priors and applying posterior sampling
to ill-posed inverse problems in two practical applications from different
domains: compressive wireless multiple-input multiple-output channel estimation
and accelerated 2D multi-coil magnetic resonance imaging reconstruction, where
we demonstrate competitive reconstruction performance when learning at
signal-to-noise ratio values of 0 and 10 dB, respectively.
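To make the training objective concrete, below is a minimal PyTorch-style sketch of the kind of joint loss the abstract describes: a Stein's-unbiased-risk-estimate (SURE) term that drives a denoiser using only noisy samples y = x + n, n ~ N(0, sigma^2 I), plus a denoising-score-matching (DSM) term fit on the denoised outputs. This is an illustrative reading of the abstract, not the authors' released code; the names denoiser, score_net, noise_std, and the weight lam are assumptions.

    import torch

    def mc_divergence(f, y, eps=1e-3):
        # Hutchinson-style Monte Carlo estimate of div_y f(y); SURE needs the
        # divergence of the denoiser, whose Jacobian is impractical to form.
        b = torch.randn_like(y)
        return (b * (f(y + eps * b) - f(y))).flatten(1).sum(dim=1) / eps

    def sure_loss(denoiser, y, sigma):
        # Unbiased estimate of the per-sample denoising MSE from noisy data only:
        #   (1/d) * ||f(y) - y||^2 + (2 * sigma^2 / d) * div f(y) - sigma^2
        d = y.flatten(1).shape[1]
        resid = (denoiser(y) - y).flatten(1).pow(2).sum(dim=1)
        return resid / d + 2.0 * sigma ** 2 * mc_divergence(denoiser, y) / d - sigma ** 2

    def dsm_loss(score_net, x_hat, noise_std):
        # Standard denoising score matching around the denoised estimates:
        # perturb x_hat and regress the score of the Gaussian perturbation kernel.
        z = torch.randn_like(x_hat)
        pred = score_net(x_hat + noise_std * z, noise_std)
        return (pred + z / noise_std).flatten(1).pow(2).sum(dim=1)

    def joint_loss(denoiser, score_net, y, sigma, noise_std, lam=1.0):
        # SURE denoises the noisy batch; DSM fits the score model on the result.
        # Detaching x_hat for the DSM term is one design choice, not the paper's.
        x_hat = denoiser(y).detach()
        return sure_loss(denoiser, y, sigma).mean() + lam * dsm_loss(score_net, x_hat, noise_std).mean()

In practice noise_std would follow a schedule of perturbation levels, as is standard for score-based models; the weighting between the two terms is a tuning knob the abstract does not specify.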
Related papers
- Score-based Generative Models with Adaptive Momentum [40.84399531998246]
We propose an adaptive momentum sampling method to accelerate the sampling process.
We show that our method can produce more faithful images/graphs in fewer sampling steps, with a 2 to 5 times speed-up.
arXiv Detail & Related papers (2024-05-22T15:20:27Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method consistently improves over existing methods.
Our method is data-efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Failure-informed adaptive sampling for PINNs, Part II: combining with re-sampling and subset simulation [6.9476677480730125]
In this work, we present two novel extensions to FI-PINNs.
The first extension combines FI-PINNs with a re-sampling technique, so that the new algorithm can maintain a constant training size.
The second extension uses the subset simulation algorithm as the posterior model for estimating the error indicator.
arXiv Detail & Related papers (2023-02-03T04:05:06Z)
- Improving the Robustness of Summarization Models by Detecting and Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z)
- Score-based diffusion models for accelerated MRI [35.3148116010546]
We introduce a way to sample data from a conditional distribution given the measurements, such that the model can be readily used for solving inverse problems in imaging.
Our model requires magnitude images only for training, yet can reconstruct complex-valued data and even extends to parallel imaging (a generic sketch of this measurement-conditioned posterior sampling appears after this list).
arXiv Detail & Related papers (2021-10-08T08:42:03Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
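As flagged in the accelerated-MRI entry above, both that paper and SURE-Score use a learned score model as a prior for posterior sampling given measurements y = A x + noise. A minimal, generic annealed-Langevin sketch of that conditional sampling step for a linear forward operator is below; it is not the exact sampler of either paper, and A, AT, sigmas, n_inner, step, and meas_var are illustrative assumptions.

    import math
    import torch

    def posterior_sample(score_net, A, AT, y, x_init, sigmas,
                         n_inner=10, step=1e-5, meas_var=1e-2):
        # Anneal through decreasing noise levels; at each level combine the
        # learned prior score with a data-consistency gradient AT(y - A x),
        # then take noisy (Langevin) gradient steps.
        x = x_init
        for sigma in sigmas:
            alpha = step * (sigma / sigmas[-1]) ** 2   # common step-size scaling
            for _ in range(n_inner):
                prior = score_net(x, sigma)            # score of the learned prior
                data = AT(y - A(x)) / (meas_var + sigma ** 2)
                x = x + alpha * (prior + data) + math.sqrt(2 * alpha) * torch.randn_like(x)
        return x

Here sigmas is the same decreasing noise schedule used at training time, and meas_var absorbs the measurement noise level; for MRI, A would be a subsampled multi-coil Fourier operator, and for channel estimation a compressive pilot matrix, per the applications named in the abstract.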