Efficient Transfer Learning in Diffusion Models via Adversarial Noise
- URL: http://arxiv.org/abs/2308.11948v1
- Date: Wed, 23 Aug 2023 06:44:44 GMT
- Authors: Xiyu Wang, Baijiong Lin, Daochang Liu, Chang Xu
- Abstract summary: Diffusion Probabilistic Models (DPMs) have demonstrated substantial promise in image generation tasks.
Previous works on GANs have tackled the limited-data problem by transferring pre-trained models learned with sufficient data.
We propose a novel DPM-based transfer learning method, TAN, to address the limited-data problem.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion Probabilistic Models (DPMs) have demonstrated substantial promise
in image generation tasks but heavily rely on the availability of large amounts
of training data. Previous works on GANs have tackled the limited-data
problem by transferring pre-trained models learned with sufficient data.
However, those methods are difficult to apply to DPMs because of distinct
differences between DPM-based and GAN-based methods, namely the iterative
denoising process that is integral to DPMs and their need for many timesteps
with untargeted noise. In this paper, we propose a novel DPM-based
transfer learning method, TAN, to address the limited data problem. It includes
two strategies: similarity-guided training, which boosts transfer with a
classifier, and adversarial noise selection, which adaptively chooses targeted
noise based on the input image. Extensive experiments in the context of
few-shot image generation tasks demonstrate that our method is not only
efficient but also excels in terms of image quality and diversity when compared
to existing GAN-based and DDPM-based methods.
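The abstract does not give implementation details, but the second strategy can be pictured as a small inner loop that perturbs the sampled noise to be maximally hard for the current denoiser, given the input image. Below is a minimal, hypothetical PyTorch sketch; the `denoiser(x_t, t)` signature, the number of ascent steps, and the step size are assumptions for illustration, not the paper's actual algorithm.

```python
import torch
import torch.nn.functional as F

def adversarial_noise(denoiser, x0, t, alpha_bar, n_steps=3, step_size=0.05):
    """Hypothetical sketch of adversarial noise selection: starting from
    Gaussian noise, take a few gradient-ascent steps on the standard DDPM
    epsilon-prediction loss so the chosen noise targets the input image."""
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)                 # cumulative alpha per sample
    for _ in range(n_steps):
        eps = eps.detach().requires_grad_(True)
        x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps   # forward diffusion q(x_t | x_0)
        loss = F.mse_loss(denoiser(x_t, t), eps)       # how badly the model denoises
        (grad,) = torch.autograd.grad(loss, eps)
        eps = eps + step_size * grad.sign()            # ascend the loss: harder noise
    # renormalize per sample so the noise stays roughly unit-variance
    eps = eps / eps.flatten(1).std(dim=1).view(-1, 1, 1, 1)
    return eps.detach()
```

The returned noise would then replace the usual i.i.d. Gaussian sample in the DDPM training loss; the similarity-guided classifier strategy is orthogonal and not sketched here.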
Related papers
- Improving Denoising Diffusion Probabilistic Models via Exploiting Shared Representations [5.517338199249029]
SR-DDPM is a class of generative models that produce high-quality images by reversing a noisy diffusion process.
By exploiting the similarity between diverse data distributions, our method can scale to multiple tasks without compromising the image quality.
We evaluate our method on standard image datasets and show that it outperforms both unconditional and conditional DDPM in terms of FID and SSIM metrics.
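For reference, the forward (noising) and learned reverse (denoising) processes that this and the other DDPM variants below build on, in the standard notation of Ho et al. (2020), with $\bar\alpha_t = \prod_{s \le t}(1-\beta_s)$:

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\big),
\qquad
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\,\mathbf{I}\big),
\qquad
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big).
```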
arXiv Detail & Related papers (2023-11-27T22:30:26Z)
- Denoising Diffusion Bridge Models [54.87947768074036]
Diffusion models are powerful generative models that map noise to data using stochastic processes.
For many applications such as image editing, the model input comes from a distribution that is not random noise.
In our work, we propose Denoising Diffusion Bridge Models (DDBMs).
arXiv Detail & Related papers (2023-09-29T03:24:24Z)
- Diffusion Model as Representation Learner [86.09969334071478]
Diffusion Probabilistic Models (DPMs) have recently demonstrated impressive results on various generative tasks.
We propose a novel knowledge transfer method that leverages the knowledge acquired by DPMs for recognition tasks.
arXiv Detail & Related papers (2023-08-21T00:38:39Z)
- BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping [64.54271680071373]
Diffusion models have demonstrated excellent potential for generating diverse images.
Knowledge distillation has recently been proposed as a remedy that can reduce the number of inference steps to one or a few.
We present BOOT, a novel technique that overcomes the data requirements of existing distillation approaches with an efficient data-free distillation algorithm based on bootstrapping.
arXiv Detail & Related papers (2023-06-08T20:30:55Z)
- Post-training Quantization on Diffusion Models [14.167428759401703]
Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data.
These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise.
Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations.
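To make the cost concrete, here is a textbook DDPM ancestral-sampling loop (a standard sketch, not this paper's code; the `denoiser(x_t, t)` signature is assumed). Every timestep requires one full network evaluation, which is why thousand-step sampling is slow and why quantizing the denoiser after training is attractive.

```python
import torch

@torch.no_grad()
def ddpm_sample(denoiser, shape, betas):
    """Textbook DDPM ancestral sampling: one denoiser call per timestep."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                                # start from pure noise
    for t in reversed(range(len(betas))):                 # e.g. 1000 iterations
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = denoiser(x, t_batch)                        # the expensive call
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean if t == 0 else mean + betas[t].sqrt() * torch.randn_like(x)
    return x
```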
arXiv Detail & Related papers (2022-11-28T19:33:39Z)
- Few-shot Image Generation with Diffusion Models [18.532357455856836]
Denoising diffusion probabilistic models (DDPMs) have been proven capable of synthesizing high-quality images with remarkable diversity when trained on large amounts of data.
Modern approaches are mainly built on Generative Adversarial Networks (GANs) and adapt models pre-trained on large source domains to target domains using a few available samples.
In this paper, we make the first attempt to study when DDPMs overfit and suffer severe diversity degradation as training data become scarce.
arXiv Detail & Related papers (2022-11-07T02:18:27Z)
- Blind Image Deblurring with Unknown Kernel Size and Substantial Noise [1.346207204106034]
Blind image deblurring (BID) has been extensively studied in computer vision and adjacent fields.
We propose a practical BID method that is stable against both unknown kernel size and substantial noise, the first of its kind.
Our method builds on the recent ideas of solving inverse problems by integrating the physical models and structured deep neural networks.
arXiv Detail & Related papers (2022-08-18T17:24:45Z)
- Accelerating Diffusion Models via Early Stop of the Diffusion Process [114.48426684994179]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved impressive performance on various generation tasks.
In practice, DDPMs often need hundreds or even thousands of denoising steps to obtain a high-quality sample.
We propose a principled acceleration strategy, referred to as Early-Stopped DDPM (ES-DDPM), for DDPMs.
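The idea, as summarized, is to truncate the reverse chain: rather than denoising from pure noise at step T, start at an intermediate step from a sample drawn by a cheaper auxiliary generator. A hypothetical sketch follows (the auxiliary generator and its interface are assumptions, not the paper's exact construction), reusing the textbook update from the sampling loop sketched above:

```python
import torch

@torch.no_grad()
def es_ddpm_sample(denoiser, aux_sample, betas, t_start):
    """Early-stopped sampling sketch: run only t_start reverse steps,
    starting from aux_sample, which should approximate q(x_{t_start})."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = aux_sample                                       # e.g. from a VAE/GAN, not pure noise
    for t in reversed(range(t_start)):                   # t_start << len(betas)
        t_batch = torch.full((x.shape[0],), t, dtype=torch.long)
        eps = denoiser(x, t_batch)
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean if t == 0 else mean + betas[t].sqrt() * torch.randn_like(x)
    return x
```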
arXiv Detail & Related papers (2022-05-25T06:40:09Z)
- Robust Face Anti-Spoofing with Dual Probabilistic Modeling [49.14353429234298]
We propose a unified framework called Dual Probabilistic Modeling (DPM), with two dedicated modules, DPM-LQ (Label Quality aware learning) and DPM-DQ (Data Quality aware learning).
DPM-LQ is able to produce robust feature representations without overfitting to the distribution of noisy semantic labels.
DPM-DQ can eliminate data noise from 'False Reject' and 'False Accept' during inference by correcting the prediction confidence of noisy data based on its quality distribution.
arXiv Detail & Related papers (2022-04-27T03:44:18Z)
- Denoising Diffusion Implicit Models [117.03720513930335]
We present denoising diffusion implicit models (DDIMs), a class of iterative implicit probabilistic models with the same training procedure as DDPMs.
DDIMs can produce high-quality samples $10\times$ to $50\times$ faster in terms of wall-clock time compared to DDPMs.
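The speedup comes from the standard deterministic, non-Markovian DDIM update, which can skip timesteps. With $\bar\alpha_t$ the cumulative product of $1-\beta_t$ and the stochasticity parameter $\eta = 0$:

```latex
\hat{x}_0 = \frac{x_t - \sqrt{1-\bar\alpha_t}\;\epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}},
\qquad
x_{t-1} = \sqrt{\bar\alpha_{t-1}}\;\hat{x}_0 + \sqrt{1-\bar\alpha_{t-1}}\;\epsilon_\theta(x_t, t).
```

Because the update is deterministic given $\epsilon_\theta$, it can be evaluated on a short subsequence of timesteps, which is what yields the $10\times$ to $50\times$ wall-clock speedup.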
arXiv Detail & Related papers (2020-10-06T06:15:51Z)