Few-shot Image Generation with Diffusion Models
- URL: http://arxiv.org/abs/2211.03264v1
- Date: Mon, 7 Nov 2022 02:18:27 GMT
- Title: Few-shot Image Generation with Diffusion Models
- Authors: Jingyuan Zhu, Huimin Ma, Jiansheng Chen, Jian Yuan
- Abstract summary: Denoising diffusion probabilistic models (DDPMs) have been proven capable of synthesizing high-quality images with remarkable diversity when trained on large amounts of data.
Modern approaches are mainly built on Generative Adversarial Networks (GANs) and adapt models pre-trained on large source domains to target domains using a few available samples.
In this paper, we make the first attempt to study when DDPMs overfit and suffer severe diversity degradation as training data become scarce.
- Score: 18.532357455856836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Denoising diffusion probabilistic models (DDPMs) have been proven capable of
synthesizing high-quality images with remarkable diversity when trained on
large amounts of data. However, to our knowledge, few-shot image generation
tasks have yet to be studied with DDPM-based approaches. Modern approaches are
mainly built on Generative Adversarial Networks (GANs) and adapt models
pre-trained on large source domains to target domains using a few available
samples. In this paper, we make the first attempt to study when DDPMs
overfit and suffer severe diversity degradation as training data become scarce.
Then we propose to adapt DDPMs pre-trained on large source domains to target
domains using limited data. Our results show that utilizing knowledge from
pre-trained DDPMs can significantly accelerate convergence and improve the
quality and diversity of the generated images. Moreover, we propose a
DDPM-based pairwise similarity loss to preserve the relative distances between
generated samples during domain adaptation. In this way, we further improve the
generation diversity of the proposed DDPM-based approaches. We demonstrate the
effectiveness of our approaches qualitatively and quantitatively on a series of
few-shot image generation tasks and achieve results better than current
state-of-the-art GAN-based approaches in quality and diversity.
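The paper does not provide code on this page, but the pairwise similarity loss described in the abstract can be sketched. The snippet below is one plausible PyTorch rendering, under the assumption that the loss compares cosine similarities between samples within a batch of predictions from the adapted model and from the frozen pre-trained source model, and matches the two similarity distributions with a KL divergence. All names and shapes are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def pairwise_similarity_loss(adapted_pred: torch.Tensor,
                             source_pred: torch.Tensor) -> torch.Tensor:
    """Hypothetical pairwise similarity loss for DDPM domain adaptation.

    Both inputs hold per-sample predictions (e.g. predicted noise) for the
    same noisy images and timesteps: `adapted_pred` from the model being
    fine-tuned, `source_pred` from the frozen pre-trained source model.
    Shape: (B, C, H, W) with B >= 2.
    """
    b = adapted_pred.size(0)
    a = adapted_pred.reshape(b, -1)
    s = source_pred.reshape(b, -1)

    # Cosine similarity between every pair of samples in the batch.
    a_sim = F.cosine_similarity(a.unsqueeze(1), a.unsqueeze(0), dim=-1)
    s_sim = F.cosine_similarity(s.unsqueeze(1), s.unsqueeze(0), dim=-1)

    # Drop self-similarities and normalize each row into a distribution
    # over the other samples in the batch.
    mask = ~torch.eye(b, dtype=torch.bool, device=a.device)
    a_logp = F.log_softmax(a_sim[mask].reshape(b, b - 1), dim=-1)
    s_prob = F.softmax(s_sim[mask].reshape(b, b - 1), dim=-1)

    # Keeping the adapted model's relative distances close to the source
    # model's is what preserves generation diversity during adaptation.
    return F.kl_div(a_logp, s_prob, reduction="batchmean")
```

In a fine-tuning loop, a term like this would typically be added to the standard denoising (noise-prediction) objective with a small weighting factor, so that the adapted model fits the few target samples while keeping its generations spread apart the way the source model's are.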
Related papers
- Diffuse-UDA: Addressing Unsupervised Domain Adaptation in Medical Image Segmentation with Appearance and Structure Aligned Diffusion Models [31.006056670998852]
The scarcity and complexity of voxel-level annotations in 3D medical imaging present significant challenges.
This disparity affects the fairness of artificial intelligence algorithms in healthcare.
We introduce Diffuse-UDA, a novel method leveraging diffusion models to tackle Unsupervised Domain Adaptation (UDA) in medical image segmentation.
arXiv Detail & Related papers (2024-08-12T08:21:04Z)
- SAR Image Synthesis with Diffusion Models [0.0]
Diffusion models (DMs) have become a popular method for generating synthetic data.
In this work, a specific type of DMs, namely denoising diffusion probabilistic model (DDPM) is adapted to the SAR domain.
We show that the DDPM qualitatively and quantitatively outperforms state-of-the-art GAN-based methods for SAR image generation.
arXiv Detail & Related papers (2024-05-13T14:21:18Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce a perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing a perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- On Inference Stability for Diffusion Models [6.846175045133414]
Denoising Probabilistic Models (DPMs) represent an emerging domain of generative models that excel in generating diverse and high-quality images.
Current training methods for DPMs often neglect the correlation between timesteps, which limits the model's ability to generate images effectively.
We propose a novel sequence-aware loss that aims to reduce the estimation gap to enhance the sampling quality.
arXiv Detail & Related papers (2023-12-19T18:57:34Z)
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging task in facial expression recognition (FER).
This paper introduces a new multi-source domain adaptation (MSDA) method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood [64.95663299945171]
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming.
There exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
We propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs.
arXiv Detail & Related papers (2023-09-10T22:05:24Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- Efficient Transfer Learning in Diffusion Models via Adversarial Noise [21.609168219488982]
Diffusion Probabilistic Models (DPMs) have demonstrated substantial promise in image generation tasks.
Previous works, such as GAN-based methods, have tackled the limited-data problem by transferring pre-trained models learned with sufficient data.
We propose a novel DPM-based transfer learning method, TAN, to address the limited-data problem.
arXiv Detail & Related papers (2023-08-23T06:44:44Z)
- DomainStudio: Fine-Tuning Diffusion Models for Domain-Driven Image Generation using Limited Data [20.998032566820907]
This paper proposes a novel DomainStudio approach to adapt DDPMs pre-trained on large-scale source datasets to target domains using limited data.
It is designed to keep the diversity of subjects provided by source domains and get high-quality and diverse adapted samples in target domains.
arXiv Detail & Related papers (2023-06-25T07:40:39Z)
- Source-free Domain Adaptation Requires Penalized Diversity [60.04618512479438]
Source-free domain adaptation (SFDA) was introduced to address knowledge transfer between different domains in the absence of source data.
In unsupervised SFDA, the diversity is limited to learning a single hypothesis on the source or learning multiple hypotheses with a shared feature extractor.
We propose a novel unsupervised SFDA algorithm that promotes representational diversity through the use of separate feature extractors.
arXiv Detail & Related papers (2023-04-06T00:20:19Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate attacking a Bayesian model to achieve desirable transferability.
Our method outperforms recent state-of-the-art approaches by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.