Solving Inverse Problems with Score-Based Generative Priors learned from
Noisy Data
- URL: http://arxiv.org/abs/2305.01166v1
- Date: Tue, 2 May 2023 02:51:01 GMT
- Title: Solving Inverse Problems with Score-Based Generative Priors learned from
Noisy Data
- Authors: Asad Aali, Marius Arvinte, Sidharth Kumar, Jonathan I. Tamir
- Abstract summary: SURE-Score is an approach for learning score-based generative models using training samples corrupted by additive Gaussian noise.
We demonstrate the generality of SURE-Score by learning priors and applying posterior sampling to ill-posed inverse problems in two practical applications.
- Score: 1.7969777786551424
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present SURE-Score: an approach for learning score-based generative models
using training samples corrupted by additive Gaussian noise. When a large
training set of clean samples is available, solving inverse problems via
score-based (diffusion) generative models trained on the underlying
fully-sampled data distribution has recently been shown to outperform
end-to-end supervised deep learning. In practice, such a large collection of
training data may be prohibitively expensive to acquire in the first place. In
this work, we present an approach for approximately learning a score-based
generative model of the clean distribution, from noisy training data. We
formulate and justify a novel loss function that leverages Stein's unbiased
risk estimate to jointly denoise the data and learn the score function via
denoising score matching, while using only the noisy samples. We demonstrate
the generality of SURE-Score by learning priors and applying posterior sampling
to ill-posed inverse problems in two practical applications from different
domains: compressive wireless multiple-input multiple-output channel estimation
and accelerated 2D multi-coil magnetic resonance imaging reconstruction, where
we demonstrate competitive reconstruction performance when learning at
signal-to-noise ratio values of 0 and 10 dB, respectively.
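The training objective described above couples a SURE-based denoising term, computable from the noisy samples alone, with a denoising score matching term applied to the denoised outputs. The following PyTorch sketch is a minimal illustration of that idea; the network interfaces (`denoiser`, `score_net`), the single-probe Hutchinson divergence estimate, and the weighting `lam` are assumptions made here for clarity, not the authors' exact formulation.

```python
import torch

def hutchinson_divergence(f, y, eps=1e-3):
    # Monte Carlo estimate of div_y f(y) with one Rademacher probe; the exact
    # divergence in the SURE term is intractable for a deep denoiser.
    b = torch.randint_like(y, 2) * 2.0 - 1.0
    return ((f(y + eps * b) - f(y)) * b).flatten(1).sum(dim=1) / eps

def sure_loss(denoiser, y, sigma_w):
    # Stein's unbiased risk estimate of the denoising MSE E||f(y) - x||^2,
    # evaluated from the noisy observation y = x + n, n ~ N(0, sigma_w^2 I) alone.
    n = y.flatten(1).shape[1]
    x_hat = denoiser(y)
    data_term = ((y - x_hat) ** 2).flatten(1).sum(dim=1)
    div_term = hutchinson_divergence(denoiser, y)
    return (data_term - n * sigma_w ** 2 + 2.0 * sigma_w ** 2 * div_term).mean(), x_hat

def dsm_loss(score_net, x_hat, sigmas):
    # Denoising score matching on the (approximately) denoised samples x_hat,
    # using one noise-conditional score network shared across all noise levels.
    idx = torch.randint(len(sigmas), (x_hat.shape[0],), device=x_hat.device)
    sigma = sigmas[idx].view(-1, *([1] * (x_hat.dim() - 1)))
    z = torch.randn_like(x_hat)
    x_noisy = x_hat + sigma * z
    target = -z / sigma  # score of the perturbation kernel N(x_hat, sigma^2 I)
    err = score_net(x_noisy, sigma) - target
    return ((sigma ** 2) * err ** 2).flatten(1).sum(dim=1).mean()

def sure_score_loss(denoiser, score_net, y, sigma_w, sigmas, lam=1.0):
    # Joint objective: denoise via SURE, then fit the score of the clean
    # distribution to the denoised samples (gradient stopped on x_hat here).
    l_sure, x_hat = sure_loss(denoiser, y, sigma_w)
    l_dsm = dsm_loss(score_net, x_hat.detach(), sigmas)
    return l_sure + lam * l_dsm
```

Once trained this way, the score network can be plugged into a standard posterior sampler (e.g. annealed Langevin dynamics with a data-consistency step) to solve the downstream inverse problem, as in the channel estimation and MRI experiments described in the abstract.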
Related papers
- Dimension-free Score Matching and Time Bootstrapping for Diffusion Models [11.743167854433306]
Diffusion models generate samples by estimating the score function of the target distribution at various noise levels.
In this work, we establish the first (nearly) dimension-free sample complexity bounds for learning these score functions.
A key aspect of our analysis is the use of a single function approximator to jointly estimate scores across noise levels.
arXiv Detail & Related papers (2025-02-14T18:32:22Z) - Distributional Diffusion Models with Scoring Rules [83.38210785728994]
Diffusion models generate high-quality synthetic data.
However, generating high-quality outputs requires many discretization steps.
We propose to accomplish sample generation by learning the posterior distribution of clean data samples.
arXiv Detail & Related papers (2025-02-04T16:59:03Z) - Sampling Binary Data by Denoising through Score Functions [2.9465623430708905]
The Tweedie-Miyasawa formula (TMF) is a key ingredient in score-based generative models.
The TMF ties denoising and sampling together via the score function of the noisy data (the Gaussian case is written out after this list).
We adopt Bernoulli noise, instead of Gaussian noise, as a smoothing device.
arXiv Detail & Related papers (2025-02-01T20:59:02Z) - The Unreasonable Effectiveness of Gaussian Score Approximation for Diffusion Models and its Applications [1.8416014644193066]
We compare learned neural scores to the scores of two kinds of analytically tractable distributions.
We claim that the learned neural score is dominated by its linear (Gaussian) approximation for moderate to high noise scales.
We show that this allows the skipping of the first 15-30% of sampling steps while maintaining high sample quality.
arXiv Detail & Related papers (2024-12-12T21:31:27Z) - Score-based Generative Models with Adaptive Momentum [40.84399531998246]
We propose an adaptive momentum sampling method to accelerate the transforming process.
We show that our method can produce more faithful images/graphs with a small number of sampling steps, yielding a 2 to 5 times speedup.
arXiv Detail & Related papers (2024-05-22T15:20:27Z) - Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z) - Understanding and Mitigating the Label Noise in Pre-training on
Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z) - Improving the Robustness of Summarization Models by Detecting and
Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z) - Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
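For context on the Tweedie-Miyasawa formula mentioned in the sampling-by-denoising entry above: in the Gaussian case it is the standard identity tying the posterior-mean denoiser to the score of the noisy data distribution,

$$ \mathbb{E}[x \mid y] \;=\; y + \sigma^{2}\,\nabla_{y}\log p(y), \qquad y = x + n,\quad n \sim \mathcal{N}(0, \sigma^{2} I), $$

while the cited paper adopts Bernoulli rather than Gaussian noise as the smoothing device.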