CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2312.06059v1
- Date: Mon, 11 Dec 2023 01:42:15 GMT
- Title: CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models
- Authors: Tuna Han Salih Meral, Enis Simsar, Federico Tombari, Pinar Yanardag
- Abstract summary: Images produced by text-to-image diffusion models might not always faithfully represent the semantic intent of the provided text prompt.
Our work introduces a novel perspective by tackling this challenge in a contrastive context.
We conduct extensive experiments across a wide variety of scenarios, each involving unique combinations of objects, attributes, and scenes.
- Score: 48.10798436003449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images produced by text-to-image diffusion models might not always faithfully
represent the semantic intent of the provided text prompt, where the model
might overlook or entirely fail to produce certain objects. Existing solutions
often require functions custom-tailored to each of these problems, leading
to sub-optimal results, especially for complex prompts. Our work introduces a
novel perspective by tackling this challenge in a contrastive context. Our
approach intuitively promotes the segregation of objects in attention maps
while also ensuring that pairs of related attributes remain close to each
other. We conduct extensive experiments across a wide variety of scenarios,
each involving unique combinations of objects, attributes, and scenes. These
experiments effectively showcase the versatility, efficiency, and flexibility
of our method in working with both latent and pixel-based diffusion models,
including Stable Diffusion and Imagen. Moreover, we publicly share our source
code to facilitate further research.
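The abstract describes the objective only at a high level. Below is a minimal sketch of how such a contrastive term over per-token cross-attention maps could look, assuming an InfoNCE-style formulation; the function name, the token grouping, and the temperature are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def contrastive_attention_loss(attn_maps: torch.Tensor,
                               groups: list[list[int]],
                               temperature: float = 0.07) -> torch.Tensor:
    """attn_maps: (T, H, W), one cross-attention map per prompt token.
    groups: token indices that belong together, e.g. [[2, 3], [6, 7]] for the
    two attribute-object pairs in "a red ball and a blue cube"."""
    flat = F.normalize(attn_maps.flatten(1), dim=-1)  # (T, H*W), unit norm
    sim = flat @ flat.T / temperature                 # pairwise map similarity
    loss, n_pairs = attn_maps.new_zeros(()), 0
    for g in groups:
        for i in g:
            # Every other token's map acts as a negative for token i.
            negatives = [k for k in range(sim.shape[0]) if k != i]
            for j in g:
                if j == i:
                    continue
                # Pull the related pair (i, j) together; push the rest apart.
                loss = loss - (sim[i, j] - torch.logsumexp(sim[i, negatives], 0))
                n_pairs += 1
    return loss / max(n_pairs, 1)

In practice the maps would be read out of the denoiser's cross-attention layers at each sampling step, and the loss gradient used to nudge the noisy latents before continuing; the publicly shared source code is the authoritative reference.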
Related papers
- Prompt-Consistency Image Generation (PCIG): A Unified Framework Integrating LLMs, Knowledge Graphs, and Controllable Diffusion Models [20.19571676239579]
We introduce a novel diffusion-based framework to enhance the alignment of generated images with their corresponding descriptions.
Our framework is built upon a comprehensive analysis of inconsistency phenomena, categorizing them based on their manifestation in the image.
We then integrate a state-of-the-art controllable image generation model with a visual text generation module to generate an image that is consistent with the original prompt.
arXiv Detail & Related papers (2024-06-24T06:12:16Z)
- Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation [60.943159830780154]
We introduce Bounded Attention, a training-free method for bounding the information flow in the sampling process.
We demonstrate that our method enables generating multiple subjects that better align with given prompts and layouts (a minimal sketch of such attention masking appears after this list).
arXiv Detail & Related papers (2024-03-25T17:52:07Z)
- Text-to-Image Diffusion Models are Great Sketch-Photo Matchmakers [120.49126407479717]
This paper explores text-to-image diffusion models for Zero-Shot Sketch-based Image Retrieval (ZS-SBIR).
We highlight a pivotal discovery: the capacity of text-to-image diffusion models to seamlessly bridge the gap between sketches and photos.
arXiv Detail & Related papers (2024-03-12T00:02:03Z)
- Leveraging Open-Vocabulary Diffusion to Camouflaged Instance Segmentation [59.78520153338878]
Text-to-image diffusion techniques have shown exceptional capability of producing high-quality images from text descriptions.
We propose a method built upon a state-of-the-art diffusion model, empowered by open-vocabulary semantics, to learn multi-scale textual-visual features for camouflaged object representations.
arXiv Detail & Related papers (2023-12-29T07:59:07Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- Explicit and implicit models in infrared and visible image fusion [5.842112272932475]
This paper discusses the limitations of deep learning models in image fusion and the corresponding optimization strategies.
Ten models were screened for comparison experiments on 21 test sets.
The qualitative and quantitative results show that the implicit models have a more comprehensive ability to learn image features.
arXiv Detail & Related papers (2022-06-20T06:05:09Z)
- Compositional Visual Generation with Composable Diffusion Models [80.75258849913574]
We propose an alternative structured approach for compositional generation using diffusion models.
An image is generated by composing a set of diffusion models, each modeling a certain component of the image (see the composition sketch after this list).
The proposed method can generate scenes at test time that are substantially more complex than those seen in training.
arXiv Detail & Related papers (2022-06-03T17:47:04Z)
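Two of the entries above describe mechanisms compact enough to sketch. For Bounded Attention, the following is a minimal sketch of layout-bounded cross-attention, assuming each subject token is restricted to a caller-provided box; the box format, the bookkeeping, and the masking rule are assumptions for illustration, not the paper's implementation.

import torch

def bounded_cross_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                            token_boxes: dict, h: int, w: int) -> torch.Tensor:
    """q: (h*w, d) image queries; k, v: (T, d) text keys/values. token_boxes
    maps a subject token index to its (y0, x0, y1, x1) layout box; tokens
    without a box stay unrestricted, so every query can attend to something."""
    scores = (q @ k.T) / (k.shape[-1] ** 0.5)  # (h*w, T) attention logits
    allow = torch.ones(h, w, k.shape[0], dtype=torch.bool)
    for t, (y0, x0, y1, x1) in token_boxes.items():
        allow[..., t] = False            # block this subject token everywhere
        allow[y0:y1, x0:x1, t] = True    # ...except inside its own box
    scores = scores.masked_fill(~allow.reshape(h * w, -1), float("-inf"))
    return torch.softmax(scores, dim=-1) @ v  # (h*w, d) bounded output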
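For Compositional Visual Generation, the composition rule itself fits in a few lines: per-concept noise predictions are combined around an unconditional baseline. The eps_model callable and the per-concept weights below are placeholders for any noise-prediction network.

import torch

def composed_noise(eps_model, x_t: torch.Tensor, t: int,
                   concepts: list, uncond, weights: list[float]) -> torch.Tensor:
    """Conjunction of concepts c_1..c_n via composed noise estimates:
    eps = eps(x_t) + sum_i w_i * (eps(x_t | c_i) - eps(x_t))."""
    eps_uncond = eps_model(x_t, t, uncond)   # unconditional baseline
    eps = eps_uncond.clone()
    for c, w in zip(concepts, weights):
        # Each concept contributes its own guidance direction, scaled by w.
        eps = eps + w * (eps_model(x_t, t, c) - eps_uncond)
    return eps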
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.