Diffusion Cocktail: Fused Generation from Diffusion Models
- URL: http://arxiv.org/abs/2312.08873v1
- Date: Tue, 12 Dec 2023 00:53:56 GMT
- Title: Diffusion Cocktail: Fused Generation from Diffusion Models
- Authors: Haoming Liu, Yuanhe Guo, Shengjie Wang, Hongyi Wen
- Abstract summary: Diffusion models excel at generating high-quality images and are easy to extend.
Recent work has focused on uncovering semantic and visual information encoded in various components of a diffusion model.
We propose Diffusion Cocktail (Ditail), a training-free method that can accurately transfer content information between two diffusion models.
- Score: 8.307057730976432
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Diffusion models excel at generating high-quality images and are easy to
extend, making them extremely popular among active users who have created an
extensive collection of diffusion models with various styles by fine-tuning
base models such as Stable Diffusion. Recent work has focused on uncovering
semantic and visual information encoded in various components of a diffusion
model, enabling better generation quality and more fine-grained control.
However, those methods focus on improving a single model and overlook the vast
publicly available collection of fine-tuned diffusion models. In this work, we study the
combinations of diffusion models. We propose Diffusion Cocktail (Ditail), a
training-free method that can accurately transfer content information between
two diffusion models. This allows us to perform diverse generations using a set
of diffusion models, resulting in novel images that are unlikely to be obtained
by a single model alone. We also explore utilizing Ditail for style transfer,
with the target style set by a diffusion model instead of an image. Ditail
offers finer-grained manipulation of the diffusion generation process,
enabling the community to seamlessly integrate diverse styles and contents
and generate any content in any style.
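The abstract does not spell out Ditail's mechanism, but the general idea of fusing two diffusion models during sampling can be illustrated with a toy sketch: run the denoising loop with one model's noise predictor, then hand the trajectory to a second model partway through. The `eps_style_a`/`eps_style_b` predictors below are hypothetical stand-ins for two fine-tuned models, and the loop is a simplified deterministic DDPM-style update, not Ditail's actual method.

```python
import numpy as np

def fused_sample(eps_a, eps_b, steps=50, switch=25, dim=8, seed=0):
    """Toy DDPM-style loop: denoise with model A's noise predictor for
    the first `switch` steps, then hand the trajectory to model B.
    `eps_a`/`eps_b` map (x, t) -> predicted noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)              # start from pure Gaussian noise
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    for i, t in enumerate(reversed(range(steps))):
        eps_fn = eps_a if i < switch else eps_b   # model hand-off
        eps = eps_fn(x, t)
        # DDPM posterior mean (stochastic noise term omitted for determinism)
        x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    return x

# hypothetical stand-ins for two fine-tuned models' noise predictors
eps_style_a = lambda x, t: 0.1 * x
eps_style_b = lambda x, t: -0.1 * x
sample = fused_sample(eps_style_a, eps_style_b)
print(sample.shape)  # (8,)
```

With real models, the predictors would be the U-Nets of two Stable Diffusion fine-tunes sharing a base model, so their latent spaces are compatible enough for such a hand-off.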
Related papers
- Diffusion Models for Multi-Task Generative Modeling [32.61765315067488]
We propose a principled way to define a diffusion model by constructing a unified multi-modal diffusion model in a common diffusion space.
We propose several multimodal generation settings to verify our framework, including image transition, masked-image training, joint image-label and joint image-representation generative modeling.
arXiv Detail & Related papers (2024-07-24T18:04:17Z)
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
- Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z)
- Interactive Fashion Content Generation Using LLMs and Latent Diffusion Models [0.0]
Fashionable image generation aims to synthesize images of diverse fashion prevalent around the globe.
We propose a method exploiting the equivalence between diffusion models and energy-based models (EBMs).
Our results indicate that using an LLM to refine the prompts to the latent diffusion model assists in generating globally creative and culturally diversified fashion styles.
arXiv Detail & Related papers (2023-05-15T18:38:25Z)
- Your Diffusion Model is Secretly a Zero-Shot Classifier [90.40799216880342]
We show that density estimates from large-scale text-to-image diffusion models can be leveraged to perform zero-shot classification.
Our generative approach to classification attains strong results on a variety of benchmarks.
Our results are a step toward using generative models, rather than discriminative ones, for downstream tasks.
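The idea of classifying with a diffusion model can be sketched as follows: noise the input, ask each class-conditional noise predictor to recover the added noise, and pick the class with the lowest prediction error, which serves as a proxy for class-conditional density. The Gaussian-centered predictors and the simple `sqrt(1-t)/sqrt(t)` noising schedule below are illustrative stand-ins, not the paper's actual text-to-image setup.

```python
import numpy as np

def diffusion_classify(x, cond_eps, n_trials=64, seed=0):
    """Zero-shot classification sketch: pick the class whose conditional
    noise predictor best recovers the noise added to the input."""
    rng = np.random.default_rng(seed)
    errors = {}
    for label, eps_fn in cond_eps.items():
        errs = []
        for _ in range(n_trials):
            t = rng.uniform(0.1, 0.9)                  # random noise level
            noise = rng.standard_normal(x.shape)
            x_t = np.sqrt(1 - t) * x + np.sqrt(t) * noise   # simplified noising
            errs.append(np.mean((eps_fn(x_t, t) - noise) ** 2))
        errors[label] = np.mean(errs)
    return min(errors, key=errors.get)

# hypothetical class-conditional predictors, each centered on a class mean
def make_eps(mu):
    return lambda x_t, t: (x_t - np.sqrt(1 - t) * mu) / np.sqrt(t)

cond = {"cat": make_eps(np.full(4, 2.0)), "dog": make_eps(np.full(4, -2.0))}
print(diffusion_classify(np.full(4, 2.0), cond))  # cat
```

For an input exactly at a class mean, that class's predictor recovers the noise perfectly, so its error is zero and it wins the argmin.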
arXiv Detail & Related papers (2023-03-28T17:59:56Z)
- Extracting Training Data from Diffusion Models [77.11719063152027]
We show that diffusion models memorize individual images from their training data and emit them at generation time.
With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models.
We train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy.
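The filtering half of a generate-and-filter pipeline can be sketched with embeddings: generate many samples, compare each against the training set, and flag those whose nearest training example exceeds a similarity threshold. The random vectors below stand in for image embeddings, and the 0.95 cosine threshold is an illustrative choice, not the paper's.

```python
import numpy as np

def filter_memorized(generated, training, threshold=0.95):
    """Flag generated samples whose cosine similarity to any training
    example exceeds `threshold`, treating them as likely memorized.
    Rows are embedding vectors (stand-ins for image embeddings)."""
    g = generated / np.linalg.norm(generated, axis=1, keepdims=True)
    t = training / np.linalg.norm(training, axis=1, keepdims=True)
    sims = g @ t.T                       # pairwise cosine similarities
    best = sims.max(axis=1)              # nearest training example per sample
    return np.where(best > threshold)[0]

rng = np.random.default_rng(0)
train = rng.standard_normal((100, 16))
gen = rng.standard_normal((10, 16))
gen[3] = train[42] + 0.01 * rng.standard_normal(16)  # planted near-duplicate
print(filter_memorized(gen, train))
```

Only the planted near-duplicate at index 3 crosses the threshold; unrelated random embeddings in 16 dimensions sit far below 0.95 cosine similarity.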
arXiv Detail & Related papers (2023-01-30T18:53:09Z)
- Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models [53.03978584040557]
We study image retrieval frameworks that enable us to compare generated images with training samples and detect when content has been replicated.
Applying our frameworks to diffusion models trained on multiple datasets including Oxford flowers, Celeb-A, ImageNet, and LAION, we discuss how factors such as training set size impact rates of content replication.
arXiv Detail & Related papers (2022-12-07T18:58:02Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
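The "patch statistics" a single-image model must capture can be made concrete with a small sketch: slide a window over the image and collect every overlapping patch, yielding the internal patch distribution. This is only an illustration of the concept, not SinDiffusion's network or its receptive-field design.

```python
import numpy as np

def extract_patches(img, size=3):
    """Collect every overlapping `size`x`size` patch of a 2-D image.
    The resulting set of patches is the internal distribution a
    single-image model is asked to capture."""
    h, w = img.shape
    patches = [img[i:i + size, j:j + size]
               for i in range(h - size + 1)
               for j in range(w - size + 1)]
    return np.stack(patches)

img = np.arange(25, dtype=float).reshape(5, 5)
patches = extract_patches(img)
print(patches.shape)  # (9, 3, 3)
```

A patch-level receptive field means the denoiser sees and models windows like these rather than the whole image at once, which is what keeps it from simply memorizing the single training image.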
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
- Diffusion Models: A Comprehensive Survey of Methods and Applications [10.557289965753437]
Diffusion models are a class of deep generative models that have shown impressive results on various tasks and rest on a solid theoretical foundation.
Recent studies have shown great enthusiasm for improving the performance of diffusion models.
arXiv Detail & Related papers (2022-09-02T02:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.