Progressive Text-to-Image Diffusion with Soft Latent Direction
- URL: http://arxiv.org/abs/2309.09466v2
- Date: Fri, 19 Jan 2024 03:37:57 GMT
- Title: Progressive Text-to-Image Diffusion with Soft Latent Direction
- Authors: YuTeng Ye, Jiale Cai, Hang Zhou, Guanwen Li, Youjia Zhang, Zikai Song,
Chenxing Gao, Junqing Yu, Wei Yang
- Abstract summary: This paper introduces an innovative progressive synthesis and editing operation that systematically incorporates entities into the target image.
Our proposed framework yields notable advancements in object synthesis, particularly when confronted with intricate and lengthy textual inputs.
- Score: 17.120153452025995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In spite of the rapidly evolving landscape of text-to-image generation, the
synthesis and manipulation of multiple entities while adhering to specific
relational constraints pose enduring challenges. This paper introduces an
innovative progressive synthesis and editing operation that systematically
incorporates entities into the target image, ensuring their adherence to
spatial and relational constraints at each sequential step. Our key insight
stems from the observation that while a pre-trained text-to-image diffusion
model adeptly handles one or two entities, it often falters when dealing with a
greater number. To address this limitation, we propose harnessing the
capabilities of a Large Language Model (LLM) to decompose intricate and
protracted text descriptions into coherent directives adhering to stringent
formats. To facilitate the execution of directives involving distinct semantic
operations, namely insertion, editing, and erasing, we formulate the Stimulus,
Response, and Fusion (SRF) framework. Within this framework, latent regions are
gently stimulated in alignment with each operation, followed by the fusion of
the responsive latent components to achieve cohesive entity manipulation. Our
proposed framework yields notable advancements in object synthesis,
particularly when confronted with intricate and lengthy textual inputs.
Consequently, it establishes a new benchmark for text-to-image generation
tasks, further elevating the field's performance standards.
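The abstract describes the pipeline only at a high level, so the following is a minimal, purely illustrative sketch of how a progressive insert/edit/erase loop with soft latent blending could be organized. Everything here is an assumption for illustration: the Directive format, the soft box mask with Gaussian falloff, and the blending rule in apply_directive are stand-ins rather than the paper's actual SRF implementation, and the random "response" stands in for a text-conditioned denoising step of a diffusion model.

```python
# Conceptual sketch (not the authors' code) of a progressive insert/edit/erase
# loop with soft latent blending. All names (Directive, soft_mask, apply_directive)
# are illustrative assumptions.
import numpy as np
from dataclasses import dataclass
from typing import Literal

@dataclass
class Directive:
    op: Literal["insert", "edit", "erase"]  # the three semantic operations
    phrase: str                              # e.g. "a red vase on the table"
    box: tuple                               # (x0, y0, x1, y1) in latent coordinates

def soft_mask(shape, box, softness=2.0):
    """Soft box mask: 1 inside the box, Gaussian falloff outside its boundary."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    x0, y0, x1, y1 = box
    dx = np.maximum(np.maximum(x0 - xs, xs - x1), 0)  # distance outside the box, 0 inside
    dy = np.maximum(np.maximum(y0 - ys, ys - y1), 0)
    return np.exp(-(dx**2 + dy**2) / (2 * softness**2))

def apply_directive(latent, directive, stimulus_strength=0.6):
    """Stimulate the masked region toward an operation-specific response, then fuse."""
    mask = soft_mask(latent.shape, directive.box)
    if directive.op == "erase":
        response = np.random.randn(*latent.shape)   # re-noise the region for re-synthesis
    else:  # "insert" or "edit": stand-in for a text-conditioned denoising response
        response = latent + 0.1 * np.random.randn(*latent.shape)
    # Fusion: soft blend between the current latent and the stimulated response.
    return (1 - stimulus_strength * mask) * latent + stimulus_strength * mask * response

# Toy usage: three sequential directives, entered one at a time.
directives = [
    Directive("insert", "a wooden table", (8, 20, 56, 60)),
    Directive("insert", "a red vase on the table", (24, 10, 40, 28)),
    Directive("edit", "make the vase blue", (24, 10, 40, 28)),
]
latent = np.random.randn(64, 64)
for d in directives:
    latent = apply_directive(latent, d)
print(latent.shape)
```

In the paper the directives themselves would be produced by an LLM that rewrites the long prompt into a strict format; here they are hard-coded so the sketch runs on its own.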
Related papers
- Training-free Composite Scene Generation for Layout-to-Image Synthesis [29.186425845897947]
This paper introduces a novel training-free approach designed to overcome adversarial semantic intersections during the diffusion conditioning phase.
We propose two innovative constraints: 1) an inter-token constraint that resolves token conflicts to ensure accurate concept synthesis; and 2) a self-attention constraint that improves pixel-to-pixel relationships.
Our evaluations confirm the effectiveness of leveraging layout information for guiding the diffusion process, generating content-rich images with enhanced fidelity and complexity.
arXiv Detail & Related papers (2024-07-18T15:48:07Z)
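The two constraints in the entry above are only named in this summary, not specified. As a rough illustration of the general training-free layout-guidance recipe that such methods build on (and not of the paper's specific inter-token or self-attention constraints), one can score how much of each token's cross-attention mass falls inside its assigned box and nudge the latent along the gradient of that score; the layout_loss, the box format, and the step size below are assumptions made for the sketch.

```python
# Illustrative sketch (an assumption, not the paper's code) of training-free
# layout guidance: measure attention leakage outside each token's box and steer
# the latent along the gradient of that score during denoising.
import torch

def box_mask(h, w, box):
    x0, y0, x1, y1 = box
    m = torch.zeros(h, w)
    m[y0:y1, x0:x1] = 1.0
    return m

def layout_loss(attn_maps, boxes):
    """attn_maps: (tokens, H, W) cross-attention maps; boxes: one box per token."""
    loss = 0.0
    for a, box in zip(attn_maps, boxes):
        m = box_mask(*a.shape, box)
        inside = (a * m).sum() / (a.sum() + 1e-8)
        loss = loss + (1.0 - inside)   # penalize attention leaking outside the box
    return loss / len(boxes)

# Toy usage: random "attention maps" that depend on a latent, so the gradient
# step below stands in for latent guidance at one denoising step.
latent = torch.randn(2, 16, 16, requires_grad=True)
attn = torch.softmax(latent.flatten(1), dim=-1).view(2, 16, 16)
loss = layout_loss(attn, [(2, 2, 8, 8), (8, 8, 14, 14)])
loss.backward()
with torch.no_grad():
    latent -= 0.5 * latent.grad        # steer the latent toward the layout
print(float(loss))
```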
- MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance [6.4680449907623006]
This research introduces the MS-Diffusion framework for layout-guided zero-shot image personalization with multiple subjects.
The proposed multi-subject cross-attention orchestrates inter-subject compositions while preserving text control.
arXiv Detail & Related papers (2024-06-11T12:32:53Z)
- Learning Generalizable Human Motion Generator with Reinforcement Learning [95.62084727984808]
Text-driven human motion generation is one of the vital tasks in computer-aided content creation.
Existing methods often overfit specific motion expressions in the training data, hindering their ability to generalize.
We present InstructMotion, which incorporates the trial-and-error paradigm of reinforcement learning for generalizable human motion generation.
arXiv Detail & Related papers (2024-05-24T13:29:12Z)
- Contextualized Diffusion Models for Text-Guided Image and Video Generation [67.69171154637172]
Conditional diffusion models have exhibited superior performance in high-fidelity text-guided visual generation and editing.
We propose a novel and general contextualized diffusion model (ContextDiff) by incorporating the cross-modal context encompassing interactions and alignments between text condition and visual sample.
We generalize our model to both DDPMs and DDIMs with theoretical derivations, and demonstrate the effectiveness of our model in evaluations with two challenging tasks: text-to-image generation, and text-to-video editing.
arXiv Detail & Related papers (2024-02-26T15:01:16Z)
- Training-Free Consistent Text-to-Image Generation [80.4814768762066]
Consistently portraying the same subject across diverse prompts remains challenging for text-to-image models.
Existing approaches fine-tune the model to teach it new words that describe specific user-provided subjects.
We present ConsiStory, a training-free approach that enables consistent subject generation by sharing the internal activations of the pretrained model.
arXiv Detail & Related papers (2024-02-05T18:42:34Z)
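The entry above states that ConsiStory shares "the internal activations of the pretrained model" without further detail. The toy sketch below shows one common reading of that idea, in which each image's self-attention queries also attend to keys and values gathered from the other images in the batch; this is an interpretation for illustration, not ConsiStory's actual implementation, which additionally restricts sharing to subject regions and mixes it with vanilla self-attention.

```python
# Rough sketch (an interpretation, not ConsiStory's code) of sharing internal
# activations: queries of each generated image attend to keys/values concatenated
# across the whole batch, encouraging a consistent subject across prompts.
import torch

def shared_self_attention(q, k, v):
    """q, k, v: (batch, tokens, dim). Keys/values are concatenated across the batch."""
    b, n, d = k.shape
    # Every image sees its own tokens plus the tokens of all other images.
    k_shared = k.reshape(1, b * n, d).expand(b, b * n, d)
    v_shared = v.reshape(1, b * n, d).expand(b, b * n, d)
    attn = torch.softmax(q @ k_shared.transpose(1, 2) / d**0.5, dim=-1)
    return attn @ v_shared

# Toy usage: two images being generated from different prompts share activations.
q = torch.randn(2, 64, 32)
k = torch.randn(2, 64, 32)
v = torch.randn(2, 64, 32)
out = shared_self_attention(q, k, v)
print(out.shape)  # (2, 64, 32)
```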
- LoCo: Locally Constrained Training-Free Layout-to-Image Synthesis [24.925757148750684]
We propose a training-free approach for layout-to-image synthesis that excels in producing high-quality images aligned with both textual prompts and layout instructions.
LoCo seamlessly integrates into existing text-to-image and layout-to-image models, enhancing their performance in spatial control and addressing semantic failures observed in prior methods.
arXiv Detail & Related papers (2023-11-21T04:28:12Z)
- Enhancing Object Coherence in Layout-to-Image Synthesis [13.289854750239956]
We propose a novel diffusion model with effective global semantic fusion (GSF) and self-similarity feature enhancement modules.
For semantic coherence, we argue that the image caption contains rich information for defining the semantic relationships among the objects in the image.
To improve physical coherence, we develop a Self-similarity Coherence Attention (SCA) module to explicitly integrate local contextual physical coherence relations into each pixel's generation process.
arXiv Detail & Related papers (2023-11-17T13:43:43Z)
- LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts [60.54912319612113]
Diffusion-based generative models have significantly advanced text-to-image generation but encounter challenges when processing lengthy and intricate text prompts.
We present a novel approach leveraging Large Language Models (LLMs) to extract critical components from text prompts.
Our evaluation on complex prompts featuring multiple objects demonstrates a substantial improvement in recall compared to baseline diffusion models.
arXiv Detail & Related papers (2023-10-16T17:57:37Z)
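As a hedged illustration of "extracting critical components from text prompts" with an LLM, as described in the entry above, the extraction step might look like the sketch below; the prompt template, the JSON schema, and the call_llm stub are assumptions, not the paper's actual interface.

```python
# Illustrative sketch of LLM-based prompt decomposition; call_llm is a stand-in
# for any chat-completion client and is NOT a real API of a specific library.
import json

EXTRACTION_TEMPLATE = """Extract every object mentioned in the prompt below.
Return strict JSON: a list of {{"object": ..., "attributes": [...], "relation": ...}}.
Prompt: {prompt}"""

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM; here we return a canned
    # answer so the sketch runs end to end.
    return json.dumps([
        {"object": "table", "attributes": ["wooden"], "relation": "in the center"},
        {"object": "vase", "attributes": ["red"], "relation": "on the table"},
    ])

def decompose(prompt: str):
    raw = call_llm(EXTRACTION_TEMPLATE.format(prompt=prompt))
    return json.loads(raw)  # each entry can then drive one generation/editing step

components = decompose("a wooden table in the center with a red vase on it ...")
for c in components:
    print(c["object"], c["attributes"], c["relation"])
```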
- Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models [62.603753097900466]
We present a novel energy-based model (EBM) framework for adaptive context control by modeling the posterior of context vectors.
Specifically, we first formulate EBMs of latent image representations and text embeddings in each cross-attention layer of the denoising autoencoder.
Our latent EBMs further allow zero-shot compositional generation as a linear combination of cross-attention outputs from different contexts.
arXiv Detail & Related papers (2023-06-16T14:30:41Z)
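The "linear combination of cross-attention outputs from different contexts" mentioned in the entry above can be pictured as follows. The attention function is generic, and the shapes and weights are illustrative assumptions; the sketch omits the paper's energy-based posterior over context vectors.

```python
# Hedged sketch of compositional generation as a weighted (linear) combination of
# cross-attention outputs computed under several different text contexts.
import torch

def cross_attention(q, k, v):
    """q: (tokens_img, d); k, v: (tokens_txt, d)."""
    attn = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

def composed_cross_attention(q, contexts, weights):
    """Linearly combine cross-attention outputs from several text contexts."""
    out = torch.zeros(q.shape[0], contexts[0][1].shape[-1])
    for (k, v), w in zip(contexts, weights):
        out = out + w * cross_attention(q, k, v)
    return out

# Toy usage: compose two prompts (e.g. "a cat" and "wearing a hat") with equal weight.
q = torch.randn(256, 64)                      # image tokens at one layer
contexts = [(torch.randn(8, 64), torch.randn(8, 64)),
            (torch.randn(8, 64), torch.randn(8, 64))]
out = composed_cross_attention(q, contexts, weights=[0.5, 0.5])
print(out.shape)  # (256, 64)
```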
- DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis [80.54273334640285]
We propose a novel one-stage text-to-image backbone that directly synthesizes high-resolution images without entanglements between different generators.
We also propose a novel Target-Aware Discriminator composed of Matching-Aware Gradient Penalty and One-Way Output.
Compared with current state-of-the-art methods, our proposed DF-GAN is simpler yet more effective at synthesizing realistic, text-matching images.
arXiv Detail & Related papers (2020-08-13T12:51:17Z)
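As a hedged sketch of what a matching-aware gradient penalty, as named in the entry above, can look like: the penalty is taken at real images paired with their matching text so that the discriminator's loss surface is smoothed toward the real-and-matching region. The tiny discriminator, the exponent, and the weight below are illustrative assumptions rather than DF-GAN's exact settings.

```python
# Hedged sketch of a matching-aware gradient penalty on (real image, matching text)
# pairs; the discriminator and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class TinyDiscriminator(nn.Module):
    def __init__(self, img_dim=128, txt_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim + txt_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, img_feat, txt_feat):
        return self.net(torch.cat([img_feat, txt_feat], dim=-1))

def matching_aware_gradient_penalty(disc, img_feat, txt_feat, weight=2.0, power=6):
    img_feat = img_feat.requires_grad_(True)
    txt_feat = txt_feat.requires_grad_(True)
    score = disc(img_feat, txt_feat)
    grads = torch.autograd.grad(score.sum(), [img_feat, txt_feat], create_graph=True)
    grad_norm = torch.cat([g.flatten(1) for g in grads], dim=1).norm(2, dim=1)
    return weight * grad_norm.pow(power).mean()

# Toy usage with pre-extracted image/text features standing in for real inputs.
disc = TinyDiscriminator()
gp = matching_aware_gradient_penalty(disc, torch.randn(4, 128), torch.randn(4, 64))
gp.backward()  # would be added to the discriminator loss during training
print(float(gp))
```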
This list is automatically generated from the titles and abstracts of the papers on this site.