CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image
Diffusion Models
- URL: http://arxiv.org/abs/2312.06059v1
- Date: Mon, 11 Dec 2023 01:42:15 GMT
- Title: CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image
Diffusion Models
- Authors: Tuna Han Salih Meral, Enis Simsar, Federico Tombari, Pinar Yanardag
- Abstract summary: Images produced by text-to-image diffusion models might not always faithfully represent the semantic intent of the provided text prompt.
Our work introduces a novel perspective by tackling this challenge in a contrastive context.
We conduct extensive experiments across a wide variety of scenarios, each involving unique combinations of objects, attributes, and scenes.
- Score: 48.10798436003449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images produced by text-to-image diffusion models might not always faithfully
represent the semantic intent of the provided text prompt, where the model
might overlook or entirely fail to produce certain objects. Existing solutions
often require custom-tailored functions for each of these problems, leading
to sub-optimal results, especially for complex prompts. Our work introduces a
novel perspective by tackling this challenge in a contrastive context. Our
approach intuitively promotes the segregation of objects in attention maps
while also ensuring that pairs of related attributes remain close to each
other. We conduct extensive experiments across a wide variety of scenarios,
each involving unique combinations of objects, attributes, and scenes. These
experiments effectively showcase the versatility, efficiency, and flexibility
of our method in working with both latent and pixel-based diffusion models,
including Stable Diffusion and Imagen. Moreover, we publicly share our source
code to facilitate further research.
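
The abstract describes pushing the attention maps of different objects apart while pulling each object's map toward the maps of its own attributes. Below is a minimal sketch of one way such a contrastive objective over cross-attention maps could look; the function name, the InfoNCE-style form, the temperature value, and the assumption that maps are available per prompt token are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_attention_loss(attn_maps, positive_pairs, temperature=0.07):
    """Illustrative InfoNCE-style loss over cross-attention maps.

    attn_maps: dict {token_index: 2-D cross-attention map for that token}
    positive_pairs: list of (object_token, attribute_token) index pairs;
        maps belonging to all other tokens serve as negatives for each pair.
    Assumes at least one positive pair is given.
    """
    # L2-normalize flattened maps so dot products act like cosine similarities.
    feats = {i: F.normalize(m.flatten(), dim=0) for i, m in attn_maps.items()}
    loss = attn_maps[positive_pairs[0][0]].new_zeros(())
    for obj, attr in positive_pairs:
        pos = torch.dot(feats[obj], feats[attr]) / temperature
        negs = [torch.dot(feats[obj], feats[other]) / temperature
                for other in feats if other not in (obj, attr)]
        logits = torch.stack([pos] + negs)
        # The object's map should be most similar to its own attribute's map,
        # and dissimilar to the maps of unrelated tokens.
        loss = loss - F.log_softmax(logits, dim=0)[0]
    return loss / len(positive_pairs)
```

During sampling, a loss of this kind could be back-propagated to the current latent and the latent nudged by a small gradient step before the next denoising iteration, as in attention-based test-time guidance methods; the exact update schedule is not specified in this summary.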
Related papers
- Human-Object Interaction Detection Collaborated with Large Relation-driven Diffusion Models [65.82564074712836]
We introduce DIFfusionHOI, a new HOI detector shedding light on text-to-image diffusion models.
We first devise an inversion-based strategy to learn the expression of relation patterns between humans and objects in embedding space.
These learned relation embeddings then serve as textual prompts to steer diffusion models to generate images that depict specific interactions.
arXiv Detail & Related papers (2024-10-26T12:00:33Z)
- Progressive Compositionality In Text-to-Image Generative Models [33.18510121342558]
We propose EvoGen, a new curriculum for contrastive learning of diffusion models.
In this work, we leverage large-language models (LLMs) to compose realistic, complex scenarios.
We also harness Visual-Question Answering (VQA) systems alongside diffusion models to automatically curate a contrastive dataset, ConPair.
arXiv Detail & Related papers (2024-10-22T05:59:29Z)
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- Text-to-Image Diffusion Models are Great Sketch-Photo Matchmakers [120.49126407479717]
This paper explores text-to-image diffusion models for Zero-Shot Sketch-based Image Retrieval (ZS-SBIR).
We highlight a pivotal discovery: the capacity of text-to-image diffusion models to seamlessly bridge the gap between sketches and photos.
arXiv Detail & Related papers (2024-03-12T00:02:03Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- Compositional Visual Generation with Composable Diffusion Models [80.75258849913574]
We propose an alternative structured approach for compositional generation using diffusion models.
An image is generated by composing a set of diffusion models, with each of them modeling a certain component of the image.
The proposed method can generate scenes at test time that are substantially more complex than those seen in training.
arXiv Detail & Related papers (2022-06-03T17:47:04Z)
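
The last entry above generates an image by composing a set of diffusion models (or several conditionings of one model), each responsible for one component of the scene. A common way to realize such composition is to sum the individual classifier-free-guidance terms at every denoising step. The sketch below assumes a generic denoiser callable `denoise(latent, t, text_embedding)` and per-concept guidance weights; it illustrates the general technique rather than that paper's exact procedure.

```python
import torch

def composed_noise_prediction(denoise, latent, t, concept_embs, uncond_emb, weights=None):
    """Combine per-concept guidance terms into a single denoising direction.

    denoise: callable (latent, t, embedding) -> predicted noise  [assumed interface]
    concept_embs: list of text embeddings, one per scene component
    uncond_emb: embedding of the empty / unconditional prompt
    """
    eps_uncond = denoise(latent, t, uncond_emb)
    if weights is None:
        weights = [7.5] * len(concept_embs)  # one guidance scale per concept (assumed default)
    eps = eps_uncond.clone()
    for w, emb in zip(weights, concept_embs):
        eps_cond = denoise(latent, t, emb)
        # Each concept contributes its own (conditional - unconditional) direction.
        eps = eps + w * (eps_cond - eps_uncond)
    return eps
```

With a single concept this reduces to standard classifier-free guidance; with several, each component steers the shared sample toward its own condition at every step, which is what allows scenes more complex than any single training prompt.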