Soft Diffusion: Score Matching for General Corruptions
- URL: http://arxiv.org/abs/2209.05442v1
- Date: Mon, 12 Sep 2022 17:45:03 GMT
- Title: Soft Diffusion: Score Matching for General Corruptions
- Authors: Giannis Daras, Mauricio Delbracio, Hossein Talebi, Alexandros G.
Dimakis, Peyman Milanfar
- Abstract summary: We propose a new objective called Soft Score Matching that provably learns the score function for any linear corruption process.
We show that our objective learns the gradient of the likelihood under suitable regularity conditions for the family of corruption processes.
Our method achieves state-of-the-art FID score $1.85$ on CelebA-64, outperforming all previous linear diffusion models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We define a broader family of corruption processes that generalizes
previously known diffusion models. To reverse these general diffusions, we
propose a new objective called Soft Score Matching that provably learns the
score function for any linear corruption process and yields state-of-the-art
results for CelebA. Soft Score Matching incorporates the degradation process in
the network and trains the model to predict a clean image that after corruption
matches the diffused observation. We show that our objective learns the
gradient of the likelihood under suitable regularity conditions for the family
of corruption processes. We further develop a principled way to select the
corruption levels for general diffusion processes and a novel sampling method
that we call Momentum Sampler. We evaluate our framework with the corruption
being Gaussian Blur and low magnitude additive noise. Our method achieves
state-of-the-art FID score $1.85$ on CelebA-64, outperforming all previous
linear diffusion models. We also show significant computational benefits
compared to vanilla denoising diffusion.
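The training objective described above — predict a clean image whose re-corruption matches the diffused observation — can be sketched as follows. This is a minimal 1-D numpy illustration under stated assumptions: the box-blur stand-in for the paper's Gaussian blur, the noise level, and the `model(xt, t)` interface are all illustrative choices, not the paper's exact setup.

```python
import numpy as np

def corrupt(x, t):
    # Hypothetical linear corruption C_t: a box blur whose width grows with t
    # (the paper uses Gaussian blur; this stands in as a simple linear operator).
    k = 2 * t + 1
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def soft_score_matching_loss(model, x0, t, noise_std=0.01, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    # Diffused observation: corrupted clean signal plus low-magnitude additive noise.
    xt = corrupt(x0, t) + noise_std * rng.standard_normal(x0.shape)
    x0_hat = model(xt, t)               # network predicts a clean signal
    residual = corrupt(x0_hat, t) - xt  # re-apply the corruption before comparing
    return np.mean(residual ** 2)
```

With an oracle model that returns the true clean signal, the loss reduces to roughly the injected noise variance, which illustrates why matching *after* corruption keeps the objective well-posed even when the degradation is not invertible.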
Related papers
- Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification [17.288347876319126]
In linear bandits, how can a learner effectively learn when facing corrupted rewards?
We compare two types of corruptions commonly considered: strong corruption, where the corruption level depends on the action chosen by the learner, and weak corruption, where the corruption level does not depend on the action chosen by the learner.
For linear bandits, we fully characterize the gap between the minimax regret under strong and weak corruptions.
arXiv Detail & Related papers (2024-10-10T02:01:46Z) - A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
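The corruption-tolerant idea summarized above can be sketched generically as robust aggregation of worker gradients inside a (lazy) mirror-descent loop. The trimmed-mean aggregator and Euclidean mirror map below are illustrative assumptions, not necessarily the paper's construction:

```python
import numpy as np

def trimmed_mean(grads, trim=1):
    # Robust aggregation: per coordinate, drop the `trim` smallest and largest
    # worker gradients before averaging, bounding the damage a corrupted worker can do.
    g = np.sort(np.stack(grads), axis=0)
    return g[trim:len(grads) - trim].mean(axis=0)

def lazy_mirror_descent_step(dual, grads, lr=0.1):
    # Lazy mirror descent accumulates gradients in the dual space; with the
    # Euclidean mirror map the primal iterate coincides with the dual point.
    dual = dual - lr * trimmed_mean(grads)
    return dual, dual  # (new dual state, primal iterate)
```

With a non-Euclidean mirror map (e.g. negative entropy on the simplex), only the primal projection changes; the robust aggregation step is unaffected.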
arXiv Detail & Related papers (2024-07-19T08:29:12Z) - Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data [74.2507346810066]
Ambient diffusion is a recently proposed framework for training diffusion models using corrupted data.
We present the first framework for training diffusion models that provably sample from the uncorrupted distribution given only noisy training data.
arXiv Detail & Related papers (2024-03-20T14:22:12Z) - Dynamic Batch Norm Statistics Update for Natural Robustness [5.366500153474747]
We propose a unified framework consisting of a corruption-detection model and BN statistics update.
Our results demonstrate accuracy improvements of about 8% on CIFAR10-C and 4% on ImageNet-C.
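The batch-norm statistics-update half of such a framework can be sketched as follows. The exponential-moving-average form is a common generic choice for test-time BN adaptation; the momentum value and interface are assumptions, not the paper's exact method:

```python
import numpy as np

def update_bn_stats(running_mean, running_var, batch, momentum=0.1):
    # Refresh BatchNorm running statistics from a (possibly corrupted) test batch,
    # blending old and new moments with an exponential moving average.
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    new_mean = (1 - momentum) * running_mean + momentum * batch_mean
    new_var = (1 - momentum) * running_var + momentum * batch_var
    return new_mean, new_var
```

In the full framework, a corruption-detection model would gate this update so statistics only shift when the input distribution has actually changed.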
arXiv Detail & Related papers (2023-10-31T17:20:30Z) - Image generation with shortest path diffusion [10.041144269046693]
We show that the Shortest Path Diffusion (SPD) determines the entire structure of the corruption.
We show that SPD improves on strong baselines without any hyperparameter tuning and outperforms all previous Diffusion Models based on image blurring.
Our work sheds new light on observations made in recent works and provides a new approach to improve diffusion models on images and other types of data.
arXiv Detail & Related papers (2023-06-01T09:53:35Z) - Ambient Diffusion: Learning Clean Distributions from Corrupted Data [77.34772355241901]
We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples.
Another benefit of our approach is the ability to train generative models that are less likely to memorize individual training samples.
arXiv Detail & Related papers (2023-05-30T17:43:33Z) - Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes [59.61248760134937]
We propose an efficient algorithm to achieve a regret of $\tilde{O}(\sqrt{T}+\zeta)$.
The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from linear contextual bandit.
We generalize our algorithm to the episodic MDP setting and first achieve an additive dependence on the corruption level $\zeta$.
arXiv Detail & Related papers (2022-12-12T15:04:56Z) - A simple way to make neural networks robust against diverse image corruptions [29.225922892332342]
We show that a simple but properly tuned training with additive Gaussian and Speckle noise generalizes surprisingly well to unseen corruptions.
An adversarial training of the recognition model against uncorrelated worst-case noise leads to an additional increase in performance.
arXiv Detail & Related papers (2020-01-16T20:10:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.