ConceptLab: Creative Concept Generation using VLM-Guided Diffusion Prior Constraints
- URL: http://arxiv.org/abs/2308.02669v2
- Date: Sun, 17 Dec 2023 22:04:14 GMT
- Title: ConceptLab: Creative Concept Generation using VLM-Guided Diffusion Prior Constraints
- Authors: Elad Richardson, Kfir Goldberg, Yuval Alaluf, Daniel Cohen-Or
- Abstract summary: We present the task of creative text-to-image generation, where we seek to generate new members of a broad category.
We show that the creative generation problem can be formulated as an optimization process over the output space of the diffusion prior.
We incorporate a question-answering Vision-Language Model (VLM) that adaptively adds new constraints to the optimization problem, encouraging the model to discover increasingly more unique creations.
- Score: 56.824187892204314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent text-to-image generative models have enabled us to transform our words
into vibrant, captivating imagery. The surge of personalization techniques that
has followed has also allowed us to imagine unique concepts in new scenes.
However, an intriguing question remains: How can we generate a new, imaginary
concept that has never been seen before? In this paper, we present the task of
creative text-to-image generation, where we seek to generate new members of a
broad category (e.g., generating a pet that differs from all existing pets). We
leverage the under-studied Diffusion Prior models and show that the creative
generation problem can be formulated as an optimization process over the output
space of the diffusion prior, resulting in a set of "prior constraints". To
keep our generated concept from converging into existing members, we
incorporate a question-answering Vision-Language Model (VLM) that adaptively
adds new constraints to the optimization problem, encouraging the model to
discover increasingly more unique creations. Finally, we show that our prior
constraints can also serve as a strong mixing mechanism allowing us to create
hybrids between generated concepts, introducing even more flexibility into the
creative process.
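As a rough illustration of the optimization the abstract describes, the sketch below replaces the Diffusion Prior with direct optimization of a CLIP-space embedding: the candidate concept is pulled toward a broad-category text constraint ("a photo of a pet") and pushed away from a list of existing-member constraints. The model name, prompts, loss weights, and the shortcut of skipping the prior itself are illustrative assumptions, not the paper's actual implementation.

    # Sketch of the "prior constraints" idea from the abstract (illustrative only).
    # Assumptions not taken from the paper: the Diffusion Prior is skipped and a
    # CLIP image-space embedding is optimized directly; the prompts, model name,
    # and loss weights below are placeholders.
    import torch
    import torch.nn.functional as F
    from transformers import CLIPModel, CLIPTokenizer

    device = "cuda" if torch.cuda.is_available() else "cpu"
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

    @torch.no_grad()
    def embed_text(prompts):
        # L2-normalized CLIP text embeddings for a list of prompts.
        inputs = tokenizer(prompts, padding=True, return_tensors="pt").to(device)
        return F.normalize(clip.get_text_features(**inputs), dim=-1)

    positive = embed_text(["a photo of a pet"])             # broad-category constraint
    negatives = embed_text(["a photo of a dog",             # existing-member constraints;
                            "a photo of a cat"])            # the VLM keeps extending this list

    # Optimization variable: a candidate concept embedding initialized near the category.
    concept = torch.nn.Parameter(positive.clone() + 0.01 * torch.randn_like(positive))
    optimizer = torch.optim.Adam([concept], lr=1e-2)

    for step in range(200):
        c = F.normalize(concept, dim=-1)
        pos_loss = 1.0 - (c @ positive.T).mean()            # stay inside the broad category
        neg_loss = (c @ negatives.T).clamp(min=0).mean()    # move away from known members
        loss = pos_loss + 0.5 * neg_loss                    # 0.5 is an assumed weighting
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In the paper the member list is not fixed: a question-answering VLM inspects renderings of the current concept, names the existing member it most resembles, and that answer is appended as a new negative constraint, which is what keeps the optimization from collapsing onto known category members.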
Related papers
- How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization? [91.49559116493414]
We propose a novel Concept-Incremental text-to-image Diffusion Model (CIDM).
It resolves catastrophic forgetting and concept neglect, learning new customization tasks in a concept-incremental manner.
Experiments validate that our CIDM surpasses existing custom diffusion models.
arXiv Detail & Related papers (2024-10-23T06:47:29Z)
- MC$^2$: Multi-concept Guidance for Customized Multi-concept Generation [49.935634230341904]
We introduce the Multi-concept guidance for Multi-concept customization, termed MC$2$, for improved flexibility and fidelity.
MC$2$ decouples the requirements for model architecture via inference time optimization.
It adaptively refines the attention weights between visual and textual tokens, directing image regions to focus on their associated words.
arXiv Detail & Related papers (2024-04-08T07:59:04Z)
- Attention Calibration for Disentangled Text-to-Image Personalization [12.339742346826403]
We propose an attention calibration mechanism to improve the concept-level understanding of the T2I model.
We demonstrate that our method outperforms the current state of the art in both qualitative and quantitative evaluations.
arXiv Detail & Related papers (2024-03-27T13:31:39Z)
- DreamCreature: Crafting Photorealistic Virtual Creatures from Imagination [140.1641573781066]
We introduce a novel task, Virtual Creatures Generation: Given a set of unlabeled images of the target concepts, we aim to train a T2I model capable of creating new, hybrid concepts.
We propose a new method called DreamCreature, which identifies and extracts the underlying sub-concepts.
The T2I model thus adapts to generate novel concepts with faithful structures and photorealistic appearance.
arXiv Detail & Related papers (2023-11-27T01:24:31Z)
- Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else [75.6806649860538]
We consider a more ambitious goal: natural multi-concept generation using a pre-trained diffusion model.
We observe concept dominance and non-localized contribution that severely degrade multi-concept generation performance.
We design a minimal low-cost solution that overcomes the above issues by tweaking the text embeddings for more realistic multi-concept text-to-image generation.
arXiv Detail & Related papers (2023-10-11T12:05:44Z)
- Multi-Concept Customization of Text-to-Image Diffusion [51.8642043743222]
We propose Custom Diffusion, an efficient method for augmenting existing text-to-image models.
We find that only optimizing a few parameters in the text-to-image conditioning mechanism is sufficiently powerful to represent new concepts.
Our model generates variations of multiple new concepts and seamlessly composes them with existing concepts in novel settings.
arXiv Detail & Related papers (2022-12-08T18:57:02Z)
- Challenges in creative generative models for music: a divergence maximization perspective [3.655021726150369]
The development of generative machine learning models for creative practices is attracting growing interest among artists, practitioners, and performers.
Most models are still unable to generate content that lies outside the domain defined by the training dataset.
We propose an alternative prospective framework, starting from a new general formulation of ML objectives.
arXiv Detail & Related papers (2022-11-16T12:02:43Z)