Designing an Encoder for Fast Personalization of Text-to-Image Models
- URL: http://arxiv.org/abs/2302.12228v2
- Date: Sun, 26 Feb 2023 18:59:29 GMT
- Title: Designing an Encoder for Fast Personalization of Text-to-Image Models
- Authors: Rinon Gal, Moab Arar, Yuval Atzmon, Amit H. Bermano, Gal Chechik,
Daniel Cohen-Or
- Abstract summary: We propose an encoder-based domain-tuning approach for text-to-image personalization.
We employ two components: first, an encoder that takes as input a single image of a target concept from a given domain.
Second, a set of regularized weight-offsets for the text-to-image model that learn how to effectively ingest additional concepts.
- Score: 57.62449900121022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-to-image personalization aims to teach a pre-trained diffusion model to
reason about novel, user-provided concepts, embedding them into new scenes
guided by natural language prompts. However, current personalization approaches
struggle with lengthy training times, high storage requirements, or loss of
identity. To overcome these limitations, we propose an encoder-based
domain-tuning approach. Our key insight is that by underfitting on a large set
of concepts from a given domain, we can improve generalization and create a
model that is more amenable to quickly adding novel concepts from the same
domain. Specifically, we employ two components: first, an encoder that takes as
input a single image of a target concept from a given domain, e.g. a
specific face, and learns to map it into a word-embedding representing the
concept. Second, a set of regularized weight-offsets for the text-to-image
model that learn how to effectively ingest additional concepts. Together, these
components are used to guide the learning of unseen concepts, allowing us to
personalize a model using only a single image and as few as 5 training steps,
accelerating personalization from dozens of minutes to seconds, while
preserving quality.
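The two components above can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch sketch based only on the abstract; the class names, the backbone, the embedding dimension, and the L2 regularizer are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the two components described in the abstract.
# All names, shapes, and hyperparameters are illustrative assumptions,
# not the authors' actual implementation.
import torch
import torch.nn as nn


class ConceptEncoder(nn.Module):
    """Maps a single image of a target concept to a word embedding."""

    def __init__(self, embed_dim: int = 768):
        super().__init__()
        # Any image backbone could stand in here; a tiny CNN keeps the sketch self-contained.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_embedding = nn.Linear(128, embed_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) -> concept word embedding: (batch, embed_dim)
        return self.to_embedding(self.backbone(image))


class RegularizedWeightOffsets(nn.Module):
    """Learned offsets added to selected text-to-image model weights.

    The L2 penalty keeps the offsets small, in the spirit of the abstract's
    underfitting on a large concept set so new concepts can be ingested quickly.
    """

    def __init__(self, base_weights: dict[str, torch.Tensor]):
        super().__init__()
        # One zero-initialized offset per selected weight tensor.
        self.offsets = nn.ParameterDict({
            name.replace(".", "_"): nn.Parameter(torch.zeros_like(w))
            for name, w in base_weights.items()
        })

    def apply_to(self, base_weights: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
        # Return the base weights with the learned offsets added.
        return {name: w + self.offsets[name.replace(".", "_")]
                for name, w in base_weights.items()}

    def regularization(self) -> torch.Tensor:
        # L2 penalty over all offsets.
        return sum((o ** 2).sum() for o in self.offsets.values())
```

In such a setup, the encoder's output would stand in for a placeholder token in the prompt, and a handful of tuning steps would refine both the predicted embedding and the weight offsets for the new concept, consistent with the few-step personalization the abstract describes.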
Related papers
- AttenCraft: Attention-guided Disentanglement of Multiple Concepts for Text-to-Image Customization [4.544788024283586]
AttenCraft is an attention-guided method for multiple concept disentanglement.
We introduce Uniform sampling and Reweighted sampling schemes to alleviate the non-synchronicity of feature acquisition from different concepts.
Our method outperforms baseline models in terms of image alignment and performs comparably on text alignment.
arXiv Detail & Related papers (2024-05-28T08:50:14Z)
- FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition [49.2208591663092]
FreeCustom is a tuning-free method to generate customized images of multi-concept composition based on reference concepts.
We introduce a new multi-reference self-attention (MRSA) mechanism and a weighted mask strategy.
Our method outperforms or performs on par with other training-based methods in terms of multi-concept composition and single-concept customization.
arXiv Detail & Related papers (2024-05-22T17:53:38Z)
- Textual Localization: Decomposing Multi-concept Images for Subject-Driven Text-to-Image Generation [5.107886283951882]
We introduce a localized text-to-image model to handle multi-concept input images.
Our method incorporates a novel cross-attention guidance to decompose multiple concepts.
Notably, our method generates cross-attention maps consistent with the target concept in the generated images.
arXiv Detail & Related papers (2024-02-15T14:19:42Z)
- CatVersion: Concatenating Embeddings for Diffusion-Based Text-to-Image Personalization [56.892032386104006]
CatVersion is an inversion-based method that learns the personalized concept through a handful of examples.
Users can utilize text prompts to generate images that embody the personalized concept.
arXiv Detail & Related papers (2023-11-24T17:55:10Z)
- Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else [75.6806649860538]
We consider a more ambitious goal: natural multi-concept generation using a pre-trained diffusion model.
We observe concept dominance and non-localized contribution that severely degrade multi-concept generation performance.
We design a minimal low-cost solution that overcomes the above issues by tweaking the text embeddings for more realistic multi-concept text-to-image generation.
arXiv Detail & Related papers (2023-10-11T12:05:44Z)
- Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models [59.094601993993535]
Text-to-image (T2I) personalization allows users to combine their own visual concepts in natural language prompts.
Most existing encoders are limited to a single-class domain, which hinders their ability to handle diverse concepts.
We propose a domain-agnostic method that does not require any specialized dataset or prior information about the personalized concepts.
arXiv Detail & Related papers (2023-07-13T17:46:42Z)
- Break-A-Scene: Extracting Multiple Concepts from a Single Image [80.47666266017207]
We introduce the task of textual scene decomposition.
We propose augmenting the input image with masks that indicate the presence of target concepts.
We then present a novel two-phase customization process.
arXiv Detail & Related papers (2023-05-25T17:59:04Z)
- InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning [20.127745565621616]
We propose InstantBooth, a novel approach built upon pre-trained text-to-image models.
Our model generates competitive results on unseen concepts in terms of language-image alignment, image fidelity, and identity preservation.
arXiv Detail & Related papers (2023-04-06T23:26:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.