What is Adversarial Training for Diffusion Models?
- URL: http://arxiv.org/abs/2505.21742v1
- Date: Tue, 27 May 2025 20:32:28 GMT
- Title: What is Adversarial Training for Diffusion Models?
- Authors: Maria Rosaria Briglia, Mujtaba Hussain Mirza, Giuseppe Lisanti, Iacopo Masi
- Abstract summary: We show that adversarial training (AT) for diffusion models (DMs) fundamentally differs from AT for classifiers. AT is a way to enforce smoothness in the diffusion flow, improving robustness to outliers and corrupted data. We rigorously evaluate our approach with proof-of-concept datasets with known distributions in low- and high-dimensional space.
- Score: 4.71482540145286
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We answer the question in the title, showing that adversarial training (AT) for diffusion models (DMs) fundamentally differs from classifiers: while AT in classifiers enforces output invariance, AT in DMs requires equivariance to keep the diffusion process aligned with the data distribution. AT is a way to enforce smoothness in the diffusion flow, improving robustness to outliers and corrupted data. Unlike prior art, our method makes no assumptions about the noise model and integrates seamlessly into diffusion training by adding random noise, similar to randomized smoothing, or adversarial noise, akin to AT. This enables intrinsic capabilities such as handling noisy data, dealing with extreme variability such as outliers, preventing memorization, and improving robustness. We rigorously evaluate our approach with proof-of-concept datasets with known distributions in low- and high-dimensional space, thereby taking a perfect measure of errors; we further evaluate on standard benchmarks such as CIFAR-10, CelebA and LSUN Bedroom, showing strong performance under severe noise, data corruption, and iterative adversarial attacks.
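The abstract's core recipe, adding random noise (randomized-smoothing style) or adversarial noise (AT style) during diffusion training, can be illustrated with a short sketch. This is a minimal illustration under standard DDPM-style epsilon-prediction assumptions, not the authors' exact procedure; names such as `model`, `alpha_bar`, and `eps_budget` are placeholders.

```python
import torch
import torch.nn.functional as F

def augmented_diffusion_loss(model, x0, alpha_bar, eps_budget=0.05, adversarial=True):
    """One training step with random or adversarial perturbation of the clean data
    (a sketch of noise-augmented diffusion training, not the paper's exact algorithm)."""
    b = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=x0.device)
    a = alpha_bar[t].view(b, 1, 1, 1)

    if adversarial:
        # FGSM-like inner step: move x0 in the direction that increases the denoising loss.
        delta = torch.zeros_like(x0, requires_grad=True)
        noise = torch.randn_like(x0)
        x_t = a.sqrt() * (x0 + delta) + (1 - a).sqrt() * noise
        inner_loss = F.mse_loss(model(x_t, t), noise)
        grad = torch.autograd.grad(inner_loss, delta)[0]
        delta = eps_budget * grad.sign()
    else:
        # Randomized-smoothing-style augmentation: plain Gaussian jitter on the data.
        delta = eps_budget * torch.randn_like(x0)

    noise = torch.randn_like(x0)
    x_t = a.sqrt() * (x0 + delta.detach()) + (1 - a).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)
```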
Related papers
- Diffusion Classifier Guidance for Non-robust Classifiers [0.5999777817331317]
We study the sensitivity of general, non-robust, and robust classifiers to noise of the diffusion process. Non-robust classifiers exhibit significant accuracy degradation under noisy conditions, leading to unstable guidance gradients. We propose a method that utilizes one-step denoised image predictions and implements techniques inspired by optimization methods.
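The "one-step denoised image prediction" mentioned here is, in standard DDPM notation, the estimate of the clean image recovered from the predicted noise. A minimal sketch (the `model(x_t, t)` signature and `alpha_bar` schedule are assumptions, not this paper's exact interface):

```python
import torch

def one_step_x0_estimate(model, x_t, t, alpha_bar):
    """x0_hat = (x_t - sqrt(1 - alpha_bar_t) * eps_hat) / sqrt(alpha_bar_t).
    A guidance classifier can be evaluated on x0_hat instead of the noisy x_t."""
    eps_hat = model(x_t, t)             # predicted noise
    a = alpha_bar[t].view(-1, 1, 1, 1)  # cumulative noise schedule at step t
    return (x_t - (1 - a).sqrt() * eps_hat) / a.sqrt()
```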
arXiv Detail & Related papers (2025-07-01T11:39:41Z) - ADT: Tuning Diffusion Models with Adversarial Supervision [16.974169058917443]
Diffusion models have achieved outstanding image generation by reversing a forward noising process to approximate true data distributions. We propose Adversarial Diffusion Tuning (ADT) to stimulate the inference process during optimization and align the final outputs with training data. ADT features a siamese-network discriminator with a fixed pre-trained backbone and lightweight trainable parameters.
arXiv Detail & Related papers (2025-04-15T17:37:50Z) - Improved Diffusion-based Generative Model with Better Adversarial Robustness [65.38540020916432]
Diffusion Probabilistic Models (DPMs) have achieved significant success in generative tasks. During the denoising process, the input data distributions differ between the training and inference stages.
arXiv Detail & Related papers (2025-02-24T12:29:16Z) - Robust Representation Consistency Model via Contrastive Denoising [83.47584074390842]
Randomized smoothing provides theoretical guarantees for certifying robustness against adversarial perturbations. Diffusion models have been successfully employed for randomized smoothing to purify noise-perturbed samples. We reformulate the generative modeling task along the diffusion trajectories in pixel space as a discriminative task in the latent space.
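For context, the generic "purify then classify" form of randomized smoothing referenced here can be sketched as a Monte-Carlo majority vote; this is an illustrative outline under assumed interfaces (`denoiser`, `classifier`), not this paper's consistency-model pipeline.

```python
import torch

def smoothed_predict(classifier, denoiser, x, num_classes, sigma=0.25, n_samples=100):
    """Randomized-smoothing prediction with diffusion-style purification (sketch).
    Expects x with a batch dimension of 1."""
    votes = torch.zeros(num_classes)
    for _ in range(n_samples):
        x_noisy = x + sigma * torch.randn_like(x)   # Gaussian perturbation
        x_pure = denoiser(x_noisy)                  # purify the noisy sample
        pred = classifier(x_pure).argmax(dim=-1)    # classify the purified sample
        votes[pred.item()] += 1
    return int(votes.argmax())                      # majority vote over samples
```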
arXiv Detail & Related papers (2025-01-22T18:52:06Z) - Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model [66.91323540178739]
Sequential recommendation (SR) aims to predict items that users may be interested in based on their historical behavior.
We revisit SR from a novel information-theoretic perspective and find that sequential modeling methods fail to adequately capture randomness and unpredictability of user behavior.
Inspired by fuzzy information processing theory, this paper introduces the fuzzy sets of interaction sequences to overcome the limitations and better capture the evolution of users' real interests.
arXiv Detail & Related papers (2024-10-31T14:52:01Z) - Free Hunch: Denoiser Covariance Estimation for Diffusion Models Without Extra Costs [25.784316302130875]
Covariance for clean data given a noisy observation is an important quantity in many training-free guided generation methods for diffusion models. We propose a new framework that sidesteps these issues by using covariance information available for free from training data and the curvature of the generative trajectory.
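For reference, the denoiser mean and covariance under Gaussian corruption are commonly expressed through first- and second-order Tweedie formulas; the statement below assumes the corruption model x_t = x_0 + sigma_t * epsilon and is not necessarily the exact parameterization used in the paper.

```latex
% Tweedie's formulas for x_t = x_0 + \sigma_t \varepsilon,\ \varepsilon \sim \mathcal{N}(0, I):
\mathbb{E}[x_0 \mid x_t] = x_t + \sigma_t^2 \, \nabla_{x_t} \log p_t(x_t),
\qquad
\operatorname{Cov}[x_0 \mid x_t] = \sigma_t^2 \left( I + \sigma_t^2 \, \nabla_{x_t}^2 \log p_t(x_t) \right).
```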
arXiv Detail & Related papers (2024-10-15T00:23:09Z) - Struggle with Adversarial Defense? Try Diffusion [8.274506117450628]
Adversarial attacks induce misclassification by introducing subtle perturbations.
Diffusion-based adversarial training often encounters convergence challenges and high computational expenses.
We propose the Truth Maximization Diffusion Classifier (TMDC) to overcome these issues.
arXiv Detail & Related papers (2024-04-12T06:52:40Z) - Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data [74.2507346810066]
Ambient diffusion is a recently proposed framework for training diffusion models using corrupted data.
We present the first framework for training diffusion models that provably sample from the uncorrupted distribution given only noisy training data.
arXiv Detail & Related papers (2024-03-20T14:22:12Z) - Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z) - Diffusion-GAN: Training GANs with Diffusion [135.24433011977874]
Generative adversarial networks (GANs) are challenging to train stably.
We propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate instance noise.
We show that Diffusion-GAN can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
arXiv Detail & Related papers (2022-06-05T20:45:01Z)
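The Diffusion-GAN idea, diffusing both real and generated samples with the same forward chain before the discriminator sees them, can be sketched as follows. This is an illustrative outline (the original method additionally adapts the maximum diffusion step during training), with assumed names such as `D(x, t)` and `alpha_bar`.

```python
import torch
import torch.nn.functional as F

def discriminator_loss_with_diffusion_noise(D, x_real, x_fake, alpha_bar, max_t=50):
    """Non-saturating discriminator loss where instance noise comes from a forward
    diffusion chain applied to both real and generated samples (sketch)."""
    b = x_real.shape[0]
    t = torch.randint(0, max_t, (b,), device=x_real.device)
    a = alpha_bar[t].view(b, 1, 1, 1)

    def diffuse(x):
        # Forward diffusion q(x_t | x_0) at a randomly drawn step t.
        return a.sqrt() * x + (1 - a).sqrt() * torch.randn_like(x)

    logits_real = D(diffuse(x_real), t)          # timestep-conditioned discriminator
    logits_fake = D(diffuse(x_fake.detach()), t)
    return (F.softplus(-logits_real) + F.softplus(logits_fake)).mean()
```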
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.