Phasic Content Fusing Diffusion Model with Directional Distribution
Consistency for Few-Shot Model Adaption
- URL: http://arxiv.org/abs/2309.03729v1
- Date: Thu, 7 Sep 2023 14:14:11 GMT
- Title: Phasic Content Fusing Diffusion Model with Directional Distribution
Consistency for Few-Shot Model Adaption
- Authors: Teng Hu, Jiangning Zhang, Liang Liu, Ran Yi, Siqi Kou, Haokun Zhu, Xu
Chen, Yabiao Wang, Chengjie Wang, Lizhuang Ma
- Abstract summary: We propose a novel phasic content fusing few-shot diffusion model with directional distribution consistency loss.
Specifically, we design a phasic training strategy with phasic content fusion that helps our model learn content and style information when t is large, and the local details of the target domain when t is small.
Finally, we propose a cross-domain structure guidance strategy that enhances structure consistency during domain adaptation.
- Score: 73.98706049140098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a generative model with a limited number of samples is a
challenging task. Current methods primarily rely on few-shot model adaption to
train the network. However, in scenarios where data is extremely limited (fewer
than 10 samples), the generative network tends to overfit and suffers from
content degradation.
To address these problems, we propose a novel phasic content fusing few-shot
diffusion model with directional distribution consistency loss, which targets
different learning objectives at distinct training stages of the diffusion
model. Specifically, we design a phasic training strategy with phasic content
fusion to help our model learn content and style information when t is large,
and learn the local details of the target domain when t is small, improving the
capture of content, style, and local details. Furthermore, we introduce a novel
directional distribution consistency loss that ensures consistency between the
generated and source distributions more efficiently and stably than prior
methods, preventing our model from overfitting. Finally,
we propose a cross-domain structure guidance strategy that enhances structure
consistency during domain adaptation. Theoretical analysis, qualitative and
quantitative experiments demonstrate the superiority of our approach in
few-shot generative model adaption tasks compared to state-of-the-art methods.
The source code is available at:
https://github.com/sjtuplayer/few-shot-diffusion.
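To make the phasic idea concrete, below is a minimal, hypothetical PyTorch-style sketch, not the authors' implementation (that is in the linked repository). It uses a frozen source-domain model to anchor content and style at large t, the usual denoising loss at small t, and a mean-shift cosine term as a stand-in for the directional distribution consistency (DDC) loss. Every name here (phase_weight, ddc_loss, eps_model, the hard threshold) is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

T = 1000  # assumed number of diffusion steps

def phase_weight(t: torch.Tensor, t_threshold: int = 500) -> torch.Tensor:
    # Hypothetical hard phase switch: 1 in the large-t (content/style) phase,
    # 0 in the small-t (local-detail) phase. A soft schedule would also work.
    return (t >= t_threshold).float()

def ddc_loss(gen_feats: torch.Tensor, src_feats: torch.Tensor,
             direction: torch.Tensor) -> torch.Tensor:
    # Hypothetical directional consistency term: the mean shift from source
    # features to generated features should align with a fixed
    # source-to-target direction vector, discouraging arbitrary drift.
    shift = gen_feats.mean(dim=0) - src_feats.mean(dim=0)
    return 1.0 - F.cosine_similarity(shift, direction, dim=0)

def training_step(eps_model, src_model, x_tgt, gen_feats, src_feats,
                  direction, alphas_cumprod, lam=0.1):
    b = x_tgt.size(0)
    t = torch.randint(0, T, (b,), device=x_tgt.device)
    noise = torch.randn_like(x_tgt)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    x_noisy = a.sqrt() * x_tgt + (1.0 - a).sqrt() * noise  # DDPM forward process
    eps_pred = eps_model(x_noisy, t)
    with torch.no_grad():
        eps_src = src_model(x_noisy, t)  # frozen source-domain model
    # Large t: stay close to the source model to preserve content and style.
    # Small t: match the true noise to learn target-domain local details.
    w = phase_weight(t).view(b, 1, 1, 1)
    per_px = w * (eps_pred - eps_src) ** 2 + (1.0 - w) * (eps_pred - noise) ** 2
    return per_px.mean() + lam * ddc_loss(gen_feats, src_feats, direction)
```

In practice the phase boundary and the feature space used for the DDC term are design choices; the hard threshold above is only for readability.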
Related papers
- Constrained Diffusion Models via Dual Training [80.03953599062365]
Diffusion processes are prone to generating samples that reflect biases in a training dataset.
We develop constrained diffusion models by imposing diffusion constraints based on desired distributions.
We show that our constrained diffusion models generate new data from a mixture data distribution that achieves the optimal trade-off between the objective and the constraints.
arXiv Detail & Related papers (2024-08-27T14:25:42Z)
- Few-Shot Image Generation by Conditional Relaxing Diffusion Inversion [37.18537753482751]
Conditional Relaxing Diffusion Inversion (CRDI) is designed to enhance distribution diversity in synthetic image generation.
Rather than fine-tuning on only a few samples, CRDI reconstructs each target image instance and expands diversity through few-shot learning.
arXiv Detail & Related papers (2024-07-09T21:58:26Z)
- BEND: Bagging Deep Learning Training Based on Efficient Neural Network Diffusion [56.9358325168226]
We propose BEND, a Bagging deep learning training algorithm based on Efficient Neural network Diffusion.
Our approach is simple but effective: it first uses the weights and biases of multiple trained models as inputs to train an autoencoder and a latent diffusion model.
The proposed BEND algorithm consistently exceeds the mean and median accuracies of both the original trained models and the diffused models.
arXiv Detail & Related papers (2024-03-23T08:40:38Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- Diffusion Model for Dense Matching [34.13580888014]
The objective for establishing dense correspondence between paired images consists of two terms: a data term and a prior term.
We propose DiffMatch, a novel conditional diffusion-based framework designed to explicitly model both the data and prior terms; a worked form of this decomposition is sketched after this list.
Our experimental results demonstrate significant performance improvements of our method over existing approaches.
arXiv Detail & Related papers (2023-05-30T14:58:24Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
- Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment [130.84010267004803]
Training a generative adversarial network (GAN) with limited data has been a challenging task.
A feasible solution is to start with a GAN well-trained on a large-scale source domain and adapt it to the target domain with a few samples, termed few-shot generative model adaption.
We propose a relaxed spatial structural alignment method to calibrate the target generative models during the adaption.
arXiv Detail & Related papers (2022-03-06T14:26:25Z)
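For the DiffMatch entry above, the "data term and prior term" phrasing is the standard MAP decomposition of dense correspondence. A worked form, with symbols assumed for illustration (F is the correspondence field, I_s and I_t the paired images):

```latex
% MAP view of dense matching: maximize the posterior over correspondences.
F^{*} = \arg\max_{F} \, p(F \mid I_s, I_t)
      = \arg\max_{F} \Big[ \underbrace{\log p(I_t \mid I_s, F)}_{\text{data term}}
        + \underbrace{\log p(F)}_{\text{prior term}} \Big]
```

DiffMatch's stated contribution is to model both terms explicitly within a single conditional diffusion framework.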
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.