Training-free Diffusion Model Adaptation for Variable-Sized
Text-to-Image Synthesis
- URL: http://arxiv.org/abs/2306.08645v2
- Date: Thu, 26 Oct 2023 09:16:25 GMT
- Title: Training-free Diffusion Model Adaptation for Variable-Sized
Text-to-Image Synthesis
- Authors: Zhiyu Jin and Xuli Shen and Bin Li and Xiangyang Xue
- Abstract summary: Diffusion models (DMs) have recently gained attention with state-of-the-art performance in text-to-image synthesis.
This paper focuses on adapting text-to-image diffusion models to handle variety while maintaining visual fidelity.
- Score: 45.19847146506007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models (DMs) have recently gained attention with state-of-the-art
performance in text-to-image synthesis. Following the convention in deep
learning, DMs are trained and evaluated on images of fixed size.
However, users demand images at specific sizes and with varied
aspect ratios. This paper focuses on adapting text-to-image diffusion models to
handle such variety while maintaining visual fidelity. First we observe that,
during the synthesis, lower resolution images suffer from incomplete object
portrayal, while higher resolution images exhibit repetitively disordered
presentation. Next, we establish a statistical relationship indicating that
attention entropy changes with token quantity, suggesting that models aggregate
spatial information in proportion to image resolution. The subsequent
interpretation of our observations is that objects are incompletely depicted
due to limited spatial information for low resolutions, while repetitively
disorganized presentation arises from redundant spatial information for high
resolutions. From this perspective, we propose a scaling factor to alleviate
the change of attention entropy and mitigate the defective pattern observed.
Extensive experimental results validate the efficacy of the proposed scaling
factor, enabling models to achieve better visual effects, image quality, and
text alignment. Notably, these improvements are achieved without additional
training or fine-tuning techniques.
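The scaling-factor idea in the abstract can be illustrated with a short sketch. Everything below is an assumption made for illustration: the factor is taken as sqrt(log n / log n_train) applied on top of the usual 1/sqrt(d) logit scale (so that attention entropy at a new token count n stays close to its value at the training token count n_train), and the function name `scaled_attention` is invented here; the paper's exact formula may differ.

```python
import numpy as np

def scaled_attention(q, k, v, n_train):
    """Self-attention with an entropy-aware logit scale (illustrative sketch).

    The standard 1/sqrt(d) scale is multiplied by sqrt(log n / log n_train),
    a plausible form of the paper's scaling factor: larger at resolutions
    above the training one, smaller below it, compensating for the way
    attention entropy grows with token count.
    """
    n, d = q.shape
    scale = np.sqrt(np.log(n) / np.log(n_train)) / np.sqrt(d)
    logits = (q @ k.T) * scale
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ v
```

Because the adjustment only rescales the attention logits, it drops into a pre-trained model at inference time without any retraining, which matches the training-free claim of the paper.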
Related papers
- Human-Object Interaction Detection Collaborated with Large Relation-driven Diffusion Models [65.82564074712836]
We introduce DIFfusionHOI, a new HOI detector shedding light on text-to-image diffusion models.
We first devise an inversion-based strategy to learn the expression of relation patterns between humans and objects in embedding space.
These learned relation embeddings then serve as textual prompts to steer diffusion models to generate images that depict specific interactions.
arXiv Detail & Related papers (2024-10-26T12:00:33Z)
- DaLPSR: Leverage Degradation-Aligned Language Prompt for Real-World Image Super-Resolution [19.33582308829547]
This paper proposes to leverage degradation-aligned language prompt for accurate, fine-grained, and high-fidelity image restoration.
The proposed method achieves a new state-of-the-art perceptual quality level.
arXiv Detail & Related papers (2024-06-24T09:30:36Z)
- Enhancing Semantic Fidelity in Text-to-Image Synthesis: Attention Regulation in Diffusion Models [23.786473791344395]
Cross-attention layers in diffusion models tend to disproportionately focus on certain tokens during the generation process.
We introduce attention regulation, an on-the-fly optimization approach at inference time to align attention maps with the input text prompt.
Experiment results show that our method consistently outperforms other baselines.
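A minimal sketch of what such inference-time attention regulation could look like, under the simplest possible assumptions: a target distribution over text tokens and plain gradient descent on the attention logits (the gradient of the cross-entropy loss with respect to the logits is just softmax minus target). The function name and the choice of loss are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def regulate_attention(logits, target, lr=0.5, steps=10):
    """Nudge cross-attention logits toward a target token distribution.

    Illustrative on-the-fly regulation: a few gradient-descent steps on
    the logits, minimising cross-entropy against `target` (e.g. a
    distribution spreading attention more evenly over prompt tokens).
    For this loss the gradient w.r.t. the logits is (softmax - target).
    """
    z = logits.copy()
    for _ in range(steps):
        p = np.exp(z - z.max(axis=-1, keepdims=True))
        p /= p.sum(axis=-1, keepdims=True)  # current attention map
        z -= lr * (p - target)              # gradient-descent step
    return z
```

The appeal of this style of method, as in the paper above, is that it touches only the inference pass: no weights are updated, so any pre-trained diffusion model can be regulated as-is.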
arXiv Detail & Related papers (2024-03-11T02:18:27Z)
- MaskDiffusion: Boosting Text-to-Image Consistency with Conditional Mask [84.84034179136458]
A crucial factor leading to the text-image mismatch issue is the inadequate cross-modality relation learning.
We propose an adaptive mask, which is conditioned on the attention maps and the prompt embeddings, to dynamically adjust the contribution of each text token to the image features.
Our method, termed MaskDiffusion, is training-free and hot-pluggable for popular pre-trained diffusion models.
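A toy sketch of the adaptive-mask idea, with the conditioning network replaced by a hypothetical linear map `w`: a per-token gate is computed from the current attention map and the prompt embeddings, then used to re-weight each token's contribution before renormalising. All names and shapes here are illustrative assumptions, not MaskDiffusion's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_token_mask(attn, prompt_emb, w):
    """Re-weight text-token contributions with an attention-conditioned gate.

    attn:       (n_pixels, n_tokens) cross-attention map, rows sum to 1
    prompt_emb: (n_tokens, d) text-token embeddings
    w:          (1 + d, 1) stand-in for the real conditioning network
    """
    # Per-token features: mean attention each token receives + its embedding.
    feats = np.concatenate([attn.mean(axis=0, keepdims=True).T, prompt_emb], axis=1)
    gate = sigmoid(feats @ w)                 # (n_tokens, 1) mask in (0, 1)
    masked = attn * gate.T                    # soft-mask token contributions
    return masked / masked.sum(axis=-1, keepdims=True)  # renormalise rows
```

Since the gate is computed from quantities already available at inference (attention maps and prompt embeddings), a mechanism of this kind can stay training-free and hot-pluggable, as the blurb describes.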
arXiv Detail & Related papers (2023-09-08T15:53:37Z)
- Cross-domain Compositing with Pretrained Diffusion Models [34.98199766006208]
We employ a localized, iterative refinement scheme which infuses the injected objects with contextual information derived from the background scene.
Our method produces higher quality and realistic results without requiring any annotations or training.
arXiv Detail & Related papers (2023-02-20T18:54:04Z)
- DaliID: Distortion-Adaptive Learned Invariance for Identification Models [9.502663556403622]
We propose a methodology called Distortion-Adaptive Learned Invariance for Identification (DaliID) models.
DaliID models achieve state-of-the-art (SOTA) for both face recognition and person re-identification on seven benchmark datasets.
arXiv Detail & Related papers (2023-02-11T18:19:41Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts with various semantic granularity.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.