Break-A-Scene: Extracting Multiple Concepts from a Single Image
- URL: http://arxiv.org/abs/2305.16311v2
- Date: Wed, 4 Oct 2023 07:38:36 GMT
- Title: Break-A-Scene: Extracting Multiple Concepts from a Single Image
- Authors: Omri Avrahami, Kfir Aberman, Ohad Fried, Daniel Cohen-Or, Dani Lischinski
- Abstract summary: We introduce the task of textual scene decomposition.
We propose augmenting the input image with masks that indicate the presence of target concepts.
We then present a novel two-phase customization process.
- Score: 80.47666266017207
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Text-to-image model personalization aims to introduce a user-provided concept
to the model, allowing its synthesis in diverse contexts. However, current
methods primarily focus on the case of learning a single concept from multiple
images with variations in backgrounds and poses, and struggle when adapted to a
different scenario. In this work, we introduce the task of textual scene
decomposition: given a single image of a scene that may contain several
concepts, we aim to extract a distinct text token for each concept, enabling
fine-grained control over the generated scenes. To this end, we propose
augmenting the input image with masks that indicate the presence of target
concepts. These masks can be provided by the user or generated automatically by
a pre-trained segmentation model. We then present a novel two-phase
customization process that optimizes a set of dedicated textual embeddings
(handles), as well as the model weights, striking a delicate balance between
accurately capturing the concepts and avoiding overfitting. We employ a masked
diffusion loss to enable handles to generate their assigned concepts,
complemented by a novel loss on cross-attention maps to prevent entanglement.
We also introduce union-sampling, a training strategy aimed at improving the
ability to combine multiple concepts in generated images. We use several
automatic metrics to quantitatively compare our method against the baselines,
and further affirm the results with a user study. Finally, we
showcase several applications of our method. Project page is available at:
https://omriavrahami.com/break-a-scene/
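The two losses named in the abstract lend themselves to a compact illustration. The PyTorch sketch below is a hypothetical reading, not the authors' released code: the function names, tensor shapes, and the exact form of the attention penalty (a plain MSE between each handle's cross-attention map and its mask) are assumptions made for clarity.

    import torch
    import torch.nn.functional as F

    def masked_diffusion_loss(noise_pred, noise, union_mask):
        """Denoising MSE computed only inside the union of the masks of the
        concepts named in the current prompt, so each handle is supervised
        only on the pixels of its assigned concept.

        noise_pred, noise: (B, C, H, W) predicted / ground-truth noise
        union_mask:        (B, 1, H, W) binary mask of the target concepts
        """
        se = (noise_pred - noise) ** 2
        return (union_mask * se).sum() / union_mask.expand_as(se).sum().clamp(min=1)

    def cross_attention_loss(attn_maps, concept_masks):
        """Penalty against entanglement: push the cross-attention map of
        each handle token toward the mask of its assigned concept.

        attn_maps:     dict handle -> (B, H, W) attention maps, resized to
                       the mask resolution and normalized to [0, 1]
        concept_masks: dict handle -> (B, H, W) binary concept masks
        """
        losses = [F.mse_loss(attn_maps[h], concept_masks[h]) for h in attn_maps]
        return torch.stack(losses).mean()

In the paper's two-phase process the handles are optimized first and the model weights afterwards; in a sketch like this, both phases would minimize a weighted sum of the two terms, with the weighting left as a hyperparameter.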
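Union-sampling admits an equally small sketch: at each training step, draw a random non-empty subset of the scene's concepts, name exactly those handles in the prompt, and pass the union of their masks to the masked loss above. The prompt template and the uniform subset sampling below are illustrative assumptions, not necessarily the paper's recipe.

    import random
    import torch

    def union_sample(handles, masks):
        """Sample a subset of concepts and build the matching training pair.

        handles: list of handle strings, e.g. ["<c0>", "<c1>", "<c2>"]
        masks:   list of (1, H, W) binary masks aligned with handles
        """
        k = random.randint(1, len(handles))
        idx = random.sample(range(len(handles)), k)
        prompt = "a photo of " + " and ".join(handles[i] for i in idx)
        union_mask = torch.stack([masks[i] for i in idx]).amax(dim=0)
        return prompt, union_mask

Training on varying subsets rather than always on the full scene is what the abstract credits with improving the model's ability to combine several concepts in a single generated image.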
Related papers
- AttenCraft: Attention-guided Disentanglement of Multiple Concepts for Text-to-Image Customization [4.544788024283586]
AttenCraft is an attention-guided method for multiple concept disentanglement.
We introduce Uniform sampling and Reweighted sampling schemes to alleviate the unsynchronized acquisition of features from different concepts.
Our method outperforms baseline models in image-alignment and performs comparably on text-alignment.
arXiv Detail & Related papers (2024-05-28T08:50:14Z)
- FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition [49.2208591663092]
FreeCustom is a tuning-free method to generate customized images of multi-concept composition based on reference concepts.
We introduce a new multi-reference self-attention (MRSA) mechanism and a weighted mask strategy.
Our method outperforms or performs on par with other training-based methods in terms of multi-concept composition and single-concept customization.
arXiv Detail & Related papers (2024-05-22T17:53:38Z)
- Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models [85.14042557052352]
We introduce Concept Weaver, a method for composing customized text-to-image diffusion models at inference time.
We show that Concept Weaver can generate multiple custom concepts with higher identity fidelity compared to alternative approaches.
arXiv Detail & Related papers (2024-04-05T06:41:27Z)
- Attention Calibration for Disentangled Text-to-Image Personalization [12.339742346826403]
We propose an attention calibration mechanism to improve the concept-level understanding of the T2I model.
We demonstrate that our method outperforms the current state of the art in both qualitative and quantitative evaluations.
arXiv Detail & Related papers (2024-03-27T13:31:39Z)
- Visual Concept-driven Image Generation with Text-to-Image Diffusion Model [65.96212844602866]
Text-to-image (TTI) models have demonstrated impressive results in generating high-resolution images of complex scenes.
Recent approaches have extended these methods with personalization techniques that allow them to integrate user-illustrated concepts.
However, the ability to generate images with multiple interacting concepts, such as human subjects, as well as concepts that may be entangled in one, or across multiple, image illustrations remains elusive.
We propose a concept-driven TTI personalization framework that addresses these core challenges.
arXiv Detail & Related papers (2024-02-18T07:28:37Z)
- Textual Localization: Decomposing Multi-concept Images for Subject-Driven Text-to-Image Generation [5.107886283951882]
We introduce a localized text-to-image model to handle multi-concept input images.
Our method incorporates a novel cross-attention guidance to decompose multiple concepts.
Notably, our method generates cross-attention maps consistent with the target concept in the generated images.
arXiv Detail & Related papers (2024-02-15T14:19:42Z)
- Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else [75.6806649860538]
We consider a more ambitious goal: natural multi-concept generation using a pre-trained diffusion model.
We observe concept dominance and non-localized contribution that severely degrade multi-concept generation performance.
We design a minimal low-cost solution that overcomes the above issues by tweaking the text embeddings for more realistic multi-concept text-to-image generation.
arXiv Detail & Related papers (2023-10-11T12:05:44Z)
- Designing an Encoder for Fast Personalization of Text-to-Image Models [57.62449900121022]
We propose an encoder-based domain-tuning approach for text-to-image personalization.
We employ two components: First, an encoder that takes as input a single image of a target concept from a given domain.
Second, a set of regularized weight-offsets for the text-to-image model that learn how to effectively ingest additional concepts.
arXiv Detail & Related papers (2023-02-23T18:46:41Z)
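The two components of this last entry are concrete enough to sketch. The modules below are a hedged approximation, assuming the encoder maps pre-extracted image features to a word-embedding-sized vector and the weight-offsets are small, L2-regularized deltas added to frozen base weights; all names and dimensions are illustrative rather than the paper's implementation.

    import torch
    import torch.nn as nn

    class ConceptEncoder(nn.Module):
        """Component 1: map features of a single concept image to a
        word-embedding-sized vector used to condition generation."""
        def __init__(self, feat_dim=768, embed_dim=768):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(feat_dim, 1024), nn.GELU(), nn.Linear(1024, embed_dim)
            )

        def forward(self, image_features):    # (B, feat_dim)
            return self.proj(image_features)  # (B, embed_dim)

    class WeightOffsets(nn.Module):
        """Component 2: learned offsets added to frozen base weights, kept
        small by an L2 penalty so new concepts are ingested without drifting
        far from the pre-trained model."""
        def __init__(self, base_shapes):
            super().__init__()
            self.deltas = nn.ParameterList(
                [nn.Parameter(torch.zeros(s)) for s in base_shapes]
            )

        def regularization(self):
            return sum(d.pow(2).sum() for d in self.deltas)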
This list is automatically generated from the titles and abstracts of the papers on this site.