Few-Shot Diffusion Models
- URL: http://arxiv.org/abs/2205.15463v1
- Date: Mon, 30 May 2022 23:20:33 GMT
- Title: Few-Shot Diffusion Models
- Authors: Giorgio Giannone, Didrik Nielsen, Ole Winther
- Abstract summary: We present Few-Shot Diffusion Models (FSDM), a framework for few-shot generation leveraging conditional DDPMs.
FSDM is trained to adapt the generative process conditioned on a small set of images from a given class by aggregating image patch information.
We empirically show that FSDM can perform few-shot generation and transfer to new datasets.
- Score: 15.828257653106537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Denoising diffusion probabilistic models (DDPM) are powerful hierarchical
latent variable models with remarkable sample generation quality and training
stability. These properties can be attributed to parameter sharing in the
generative hierarchy, as well as a parameter-free diffusion-based inference
procedure. In this paper, we present Few-Shot Diffusion Models (FSDM), a
framework for few-shot generation leveraging conditional DDPMs. FSDMs are
trained to adapt the generative process conditioned on a small set of images
from a given class by aggregating image patch information using a set-based
Vision Transformer (ViT). At test time, the model is able to generate samples
from previously unseen classes conditioned on as few as 5 samples from that
class. We empirically show that FSDM can perform few-shot generation and
transfer to new datasets. We benchmark variants of our method on complex vision
datasets for few-shot learning and compare to unconditional and conditional
DDPM baselines. Additionally, we show how conditioning the model on patch-based
input set information improves training convergence.
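To make the conditioning pathway concrete, below is a minimal PyTorch sketch of a set-conditioned DDPM training step: a small transformer encoder aggregates patch tokens from the K support images into one context vector, which conditions a toy noise predictor through FiLM-style modulation. This is an illustration of the idea rather than the authors' architecture; the mean-pooled aggregation, the FiLM conditioning, the tiny convolutional denoiser, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SetViTEncoder(nn.Module):
    """Aggregate patch tokens from a small support set into one context vector.

    Minimal stand-in for a set-based ViT: every support image is split into
    patches, embedded, processed jointly by a transformer encoder, and
    mean-pooled into a single conditioning vector c.
    """

    def __init__(self, dim=128, patch=4, depth=4, heads=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, support):                                  # support: (B, K, 3, H, W)
        b, k, c, h, w = support.shape
        tok = self.patch_embed(support.reshape(b * k, c, h, w))  # (B*K, D, h', w')
        tok = tok.flatten(2).transpose(1, 2)                     # (B*K, P, D) patch tokens
        tok = tok.reshape(b, k * tok.shape[1], -1)               # (B, K*P, D) one token set
        return self.encoder(tok).mean(dim=1)                     # (B, D) set-level context


class ConditionalDenoiser(nn.Module):
    """Toy epsilon-predictor conditioned on timestep t and set context c via FiLM."""

    def __init__(self, dim=128, width=64, timesteps=1000):
        super().__init__()
        self.t_embed = nn.Embedding(timesteps, dim)
        self.film = nn.Linear(2 * dim, 2 * width)                # per-channel scale and shift
        self.inp = nn.Conv2d(3, width, 3, padding=1)
        self.mid = nn.Conv2d(width, width, 3, padding=1)
        self.out = nn.Conv2d(width, 3, 3, padding=1)

    def forward(self, x_t, t, c):
        scale, shift = self.film(torch.cat([self.t_embed(t), c], dim=-1)).chunk(2, dim=-1)
        h = F.silu(self.inp(x_t))
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return self.out(F.silu(self.mid(h)))


def fsdm_training_step(encoder, denoiser, x0, support, alphas_bar):
    """One DDPM training step: noise x0, then predict the noise given (t, c)."""
    b = x0.shape[0]
    t = torch.randint(0, alphas_bar.shape[0], (b,), device=x0.device)
    eps = torch.randn_like(x0)
    a = alphas_bar[t].view(b, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps                   # forward diffusion q(x_t | x_0)
    c = encoder(support)                                         # context from the K-shot set
    return F.mse_loss(denoiser(x_t, t, c), eps)                  # standard epsilon-prediction loss


if __name__ == "__main__":
    enc, den = SetViTEncoder(), ConditionalDenoiser()
    betas = torch.linspace(1e-4, 0.02, 1000)
    alphas_bar = torch.cumprod(1 - betas, dim=0)
    x0 = torch.randn(2, 3, 32, 32)                               # batch of target images
    support = torch.randn(2, 5, 3, 32, 32)                       # 5-shot conditioning set each
    print(fsdm_training_step(enc, den, x0, support, alphas_bar).item())
```

At test time, the same encoder would be fed a support set from an unseen class and the resulting context vector would condition the reverse-process sampler instead of the training loss.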
Related papers
- Conditional Distribution Modelling for Few-Shot Image Synthesis with Diffusion Models [29.821909424996015]
Few-shot image synthesis entails generating diverse and realistic images of novel categories using only a few example images.
We propose Conditional Distribution Modelling (CDM) -- a framework which effectively utilizes Diffusion models for few-shot image generation.
arXiv Detail & Related papers (2024-04-25T12:11:28Z)
- A Phase Transition in Diffusion Models Reveals the Hierarchical Nature of Data [55.748186000425996]
Recent advancements show that diffusion models can generate high-quality images.
We study this phenomenon in a hierarchical generative model of data.
Our analysis characterises the relationship between time and scale in diffusion models.
arXiv Detail & Related papers (2024-02-26T19:52:33Z)
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
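The overlap-reduction idea can be illustrated with a generic supervised contrastive objective over features of synthetic images: same-class samples are pulled together and different-class samples pushed apart, shrinking the overlap between per-class feature distributions. This is a standard contrastive loss used as an illustration, not the paper's exact objective; the feature source and temperature are assumptions.

```python
import torch
import torch.nn.functional as F


def class_separation_loss(features, labels, temperature=0.1):
    """Generic supervised contrastive loss over features of synthetic images.

    Same-class features are treated as positives, all other samples as
    negatives, so minimizing the loss reduces the overlap between per-class
    feature distributions. Illustrative only, not the paper's objective.
    """
    z = F.normalize(features, dim=1)                       # (N, D) unit-norm features
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    n = z.shape[0]
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels[:, None] == labels[None, :]) & ~eye      # same-class pairs, excluding self

    logits = sim.masked_fill(eye, float("-inf"))           # never contrast a sample with itself
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos, 0.0)         # keep only positive-pair terms
    return -(pos_log_prob.sum(dim=1) / pos.sum(dim=1).clamp(min=1)).mean()


if __name__ == "__main__":
    feats = torch.randn(16, 128)                           # features of synthetic samples
    labels = torch.randint(0, 4, (16,))                    # their (possibly imbalanced) classes
    print(class_separation_loss(feats, labels).item())
```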
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
- Improving Denoising Diffusion Probabilistic Models via Exploiting Shared Representations [5.517338199249029]
SR-DDPM is a class of generative models that produce high-quality images by reversing a noisy diffusion process.
By exploiting the similarity between diverse data distributions, our method can scale to multiple tasks without compromising image quality.
We evaluate our method on standard image datasets and show that it outperforms both unconditional and conditional DDPMs in terms of FID and SSIM.
arXiv Detail & Related papers (2023-11-27T22:30:26Z)
- Diffusion-TTA: Test-time Adaptation of Discriminative Models via Generative Feedback [97.0874638345205]
Generative models can be great test-time adapters for discriminative models.
Our method, Diffusion-TTA, adapts pre-trained discriminative models to each unlabelled example in the test set.
We show Diffusion-TTA significantly enhances the accuracy of various large-scale pre-trained discriminative models.
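The sketch below illustrates one way such generative feedback can work: the classifier's predicted class probabilities determine the conditioning of a frozen diffusion denoiser, and the denoising error on the unlabeled test image is backpropagated to update the classifier. The probability-weighted class embedding, the tiny stand-in modules, and the optimizer settings are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def generative_feedback_step(classifier, denoiser, class_emb, x, alphas_bar, opt, steps=5):
    """Adapt a classifier to one unlabeled test image using diffusion loss as feedback.

    The classifier's predicted probabilities weight class embeddings that
    condition a frozen denoiser; minimizing the denoising error w.r.t. the
    classifier's parameters nudges the prediction toward the class under which
    the image is reconstructed best. Schematic only.
    """
    for _ in range(steps):
        probs = classifier(x).softmax(dim=-1)              # (B, C) current prediction
        cond = probs @ class_emb                           # (B, D) probability-weighted embedding
        t = torch.randint(0, alphas_bar.shape[0], (x.shape[0],), device=x.device)
        eps = torch.randn_like(x)
        a = alphas_bar[t].view(-1, 1, 1, 1)
        x_t = a.sqrt() * x + (1 - a).sqrt() * eps          # forward-diffuse the test image
        loss = F.mse_loss(denoiser(x_t, t, cond), eps)     # generative feedback signal
        opt.zero_grad()
        loss.backward()                                    # gradients reach only the classifier
        opt.step()
    return classifier(x).argmax(dim=-1)                    # adapted prediction


if __name__ == "__main__":
    class TinyDenoiser(nn.Module):                         # frozen stand-in for a conditional DDPM
        def __init__(self, dim=128):
            super().__init__()
            self.proj = nn.Linear(dim, 3)
            self.conv = nn.Conv2d(3, 3, 3, padding=1)

        def forward(self, x_t, t, cond):
            return self.conv(x_t) + self.proj(cond)[:, :, None, None]

    classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    class_emb = torch.randn(10, 128)                       # one embedding per class
    denoiser = TinyDenoiser()
    for p in denoiser.parameters():
        p.requires_grad_(False)                            # only the classifier adapts
    betas = torch.linspace(1e-4, 0.02, 1000)
    alphas_bar = torch.cumprod(1 - betas, dim=0)
    x = torch.randn(1, 3, 32, 32)                          # one unlabeled test image
    opt = torch.optim.SGD(classifier.parameters(), lr=1e-3)
    print(generative_feedback_step(classifier, denoiser, class_emb, x, alphas_bar, opt))
```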
arXiv Detail & Related papers (2023-11-27T18:59:53Z)
- Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct consistently improves their pre-trained generators.
arXiv Detail & Related papers (2023-05-29T04:22:57Z)
- ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
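A schematic of the trajectory-shifting idea: the forward-process mean is offset by a condition-dependent shift whose weight grows with the timestep, so each condition diffuses along its own trajectory. The linear shift schedule below is an assumption standing in for the paper's shifting rules.

```python
import torch


def shifted_forward_sample(x0, shift, alphas_bar, t):
    """Sample x_t from a condition-shifted forward diffusion process.

    Instead of q(x_t | x_0) = N(sqrt(a_t) * x_0, (1 - a_t) I), the mean is
    offset by a condition-specific shift whose weight k_t grows with t, so
    every condition occupies its own trajectory in latent space. The linear
    k_t schedule is an assumption, not the paper's shifting rule.
    """
    a = alphas_bar[t].view(-1, 1, 1, 1)
    k = 1 - a.sqrt()                              # assumed schedule: 0 at t=0, near 1 at t=T
    eps = torch.randn_like(x0)
    return a.sqrt() * x0 + k * shift + (1 - a).sqrt() * eps, eps


if __name__ == "__main__":
    betas = torch.linspace(1e-4, 0.02, 1000)
    alphas_bar = torch.cumprod(1 - betas, dim=0)
    x0 = torch.randn(4, 3, 32, 32)
    shift = 0.5 * torch.randn(4, 3, 32, 32)       # e.g. an embedding of each sample's condition
    t = torch.randint(0, 1000, (4,))
    x_t, eps = shifted_forward_sample(x0, shift, alphas_bar, t)
    print(x_t.shape, eps.shape)
```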
arXiv Detail & Related papers (2023-02-05T12:48:21Z)
- Fast Inference in Denoising Diffusion Models via MMD Finetuning [23.779985842891705]
We present MMD-DDM, a novel method for fast sampling of diffusion models.
Our approach is based on the idea of using the Maximum Mean Discrepancy (MMD) to finetune the learned distribution with a given budget of timesteps.
Our findings show that the proposed method is able to produce high-quality samples in a fraction of the time required by widely-used diffusion models.
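A minimal sketch of an MMD finetuning loss: a kernel-based MMD^2 estimate between features of few-step generated samples and features of real data, which would be minimized with respect to the diffusion model's parameters. The RBF kernel with a single fixed bandwidth and the stand-in feature tensors are simplifying assumptions.

```python
import torch


def mmd_rbf(x, y, bandwidth=1.0):
    """Biased MMD^2 estimate with a Gaussian (RBF) kernel between two sample sets.

    For MMD-based finetuning, x would hold features of samples generated under a
    small, fixed budget of timesteps and y features of real data; minimizing the
    estimate w.r.t. the diffusion model's parameters pulls the few-step sampling
    distribution toward the data distribution.
    """
    k = lambda d2: torch.exp(-d2 / (2 * bandwidth ** 2))
    xx = k(torch.cdist(x, x) ** 2)
    yy = k(torch.cdist(y, y) ** 2)
    xy = k(torch.cdist(x, y) ** 2)
    return xx.mean() + yy.mean() - 2 * xy.mean()


if __name__ == "__main__":
    fake_feats = torch.randn(64, 128, requires_grad=True)   # stand-in: features of few-step samples
    real_feats = torch.randn(64, 128)                       # stand-in: features of real images
    loss = mmd_rbf(fake_feats, real_feats)
    loss.backward()                                         # would flow back through the sampler
    print(loss.item())
```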
arXiv Detail & Related papers (2023-01-19T09:48:07Z)
- f-DM: A Multi-stage Diffusion Model via Progressive Signal Transformation [56.04628143914542]
Diffusion models (DMs) have recently emerged as SoTA tools for generative modeling in various domains.
We propose f-DM, a generalized family of DMs which allows progressive signal transformation.
We apply f-DM in image generation tasks with a range of functions, including down-sampling, blurring, and learned transformations.
arXiv Detail & Related papers (2022-10-10T18:49:25Z)
- Hierarchical Few-Shot Generative Models [18.216729811514718]
We study a latent variable approach that extends the Neural Statistician to a fully hierarchical model with attention-based point-to-set-level aggregation.
Our results show that the hierarchical formulation better captures the intrinsic variability within the sets in the small data regime.
arXiv Detail & Related papers (2021-10-23T19:19:39Z)
- Improved Denoising Diffusion Probabilistic Models [4.919647298882951]
We show that DDPMs can achieve competitive log-likelihoods while maintaining high sample quality.
We also find that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes.
We show that the sample quality and likelihood of these models scale smoothly with model capacity and training compute, making them easily scalable.
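The learned-variance idea can be sketched as a log-domain interpolation between the two fixed variance choices beta_t and the posterior variance beta_tilde_t, with the network predicting the interpolation weight; a minimal version, with the linear noise schedule assumed for illustration:

```python
import torch


def learned_reverse_variance(v, betas, t):
    """Interpolate the reverse-process variance between beta_t and beta_tilde_t.

    The network outputs v per dimension, and the variance is
    exp(v * log beta_t + (1 - v) * log beta_tilde_t): a log-domain
    interpolation between the two fixed choices of the original DDPM.
    """
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    alphas_bar_prev = torch.cat([torch.ones(1), alphas_bar[:-1]])
    beta_tilde = betas * (1.0 - alphas_bar_prev) / (1.0 - alphas_bar)   # posterior variance
    log_beta = torch.log(betas[t]).view(-1, 1, 1, 1)
    log_beta_tilde = torch.log(beta_tilde[t].clamp(min=1e-20)).view(-1, 1, 1, 1)
    return torch.exp(v * log_beta + (1.0 - v) * log_beta_tilde)


if __name__ == "__main__":
    betas = torch.linspace(1e-4, 0.02, 1000)                 # assumed linear schedule
    t = torch.randint(1, 1000, (8,))
    v = torch.rand(8, 3, 32, 32)                             # network output squashed to [0, 1]
    print(learned_reverse_variance(v, betas, t).shape)
```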
arXiv Detail & Related papers (2021-02-18T23:44:17Z)