ConES: Concept Embedding Search for Parameter Efficient Tuning Large
Vision Language Models
- URL: http://arxiv.org/abs/2305.18993v1
- Date: Tue, 30 May 2023 12:45:49 GMT
- Title: ConES: Concept Embedding Search for Parameter Efficient Tuning Large
Vision Language Models
- Authors: Huahui Yi, Ziyuan Qin, Wei Xu, Miaotian Guo, Kun Wang, Shaoting Zhang,
Kang Li, Qicheng Lao
- Abstract summary: We propose a Concept Embedding Search (ConES) approach that optimizes prompt embeddings without relying on a text encoder.
By dropping the text encoder, we are able to significantly speed up the learning process.
Our approach can outperform prompt tuning and textual inversion methods in a variety of downstream tasks.
- Score: 21.15548013842187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large pre-trained vision-language models have shown great prominence in
transferring pre-acquired knowledge to various domains and downstream tasks
with appropriate prompting or tuning. Existing prevalent tuning methods can be
generally categorized into three genres: 1) prompt engineering, which crafts suitable prompt texts but is time-consuming and requires domain expertise; 2) fine-tuning the whole model, which is extremely inefficient; and 3) prompt tuning, which learns parameterized prompt embeddings through the text encoder.
Nevertheless, all of these methods rely on the text encoder to bridge the modality gap between vision and language. In this work, we question whether the cumbersome text encoder is necessary, seeking a more lightweight and efficient tuning paradigm as well as more representative prompt embeddings that lie closer to the image representations. To achieve this, we propose a Concept Embedding Search (ConES) approach that optimizes prompt embeddings -- without the need for the text encoder -- to capture the 'concept' of the image modality through a variety of
task objectives. By dropping the text encoder, we are able to significantly
speed up the learning process, e.g., from about an hour to just ten minutes in
our experiments for personalized text-to-image generation without impairing the
generation quality. Moreover, our proposed approach is orthogonal to current
existing tuning methods since the searched concept embeddings can be further
utilized in the next stage of fine-tuning the pre-trained large models for
boosting performance. Extensive experiments show that our approach can outperform prompt tuning and textual inversion methods in a variety of downstream tasks, including object detection, instance segmentation, and image generation. Our
approach also shows better generalization capability for unseen concepts in
specialized domains, such as the medical domain.
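For intuition, below is a minimal sketch of what searching a concept embedding without a text encoder might look like in practice. It assumes a frozen CLIP-style image encoder loaded via the open_clip package and uses a simple cosine-similarity objective as a stand-in for the paper's task objectives (detection, segmentation, or diffusion losses); it is an illustrative assumption, not the authors' implementation.

```python
# Sketch: optimize a learnable concept embedding directly against a task
# objective; no text encoder is instantiated or called at any point.
import torch
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen vision backbone only (ViT-B-32 with OpenAI weights).
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
model = model.to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Learnable concept embedding in the joint embedding space (512-d for ViT-B-32).
concept = torch.nn.Parameter(0.02 * torch.randn(1, 512, device=device))
optimizer = torch.optim.AdamW([concept], lr=1e-2)

def concept_loss(images):
    """Pull the concept embedding toward the frozen image features
    (a stand-in for the task objectives used in the paper)."""
    with torch.no_grad():
        feats = model.encode_image(images)
        feats = feats / feats.norm(dim=-1, keepdim=True)
    c = concept / concept.norm(dim=-1, keepdim=True)
    return (1.0 - feats @ c.t()).mean()   # 1 - cosine similarity

# images: a batch of preprocessed exemplars of the target concept, e.g.
#   images = torch.stack([preprocess(img) for img in pil_images]).to(device)
# for step in range(200):
#     loss = concept_loss(images)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Only the embedding tensor receives gradients here; the vision backbone stays frozen, so each step costs a single image-encoder forward pass.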
Related papers
- TextBoost: Towards One-Shot Personalization of Text-to-Image Models via Fine-tuning Text Encoder [13.695128139074285]
This paper addresses the challenge of one-shot personalization by mitigating overfitting, enabling the creation of controllable images through text prompts.
We introduce three key techniques to enhance personalization performance: (1) augmentation tokens to encourage feature disentanglement and alleviate overfitting, (2) a knowledge-preservation loss to reduce language drift and promote generalizability across diverse prompts, and (3) SNR-weighted sampling for efficient training.
arXiv Detail & Related papers (2024-09-12T17:47:51Z)
- Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance [62.15866177242207]
We show that through constructing a subject-agnostic condition, one could obtain outputs consistent with both the given subject and input text prompts.
Our approach is conceptually simple and requires only minimal code modifications, but leads to substantial quality improvements.
arXiv Detail & Related papers (2024-05-02T15:03:41Z)
- Seek for Incantations: Towards Accurate Text-to-Image Diffusion Synthesis through Prompt Engineering [118.53208190209517]
We propose a framework to learn the proper textual descriptions for diffusion models through prompt learning.
Our method can effectively learn the prompts to improve the matches between the input text and the generated images.
arXiv Detail & Related papers (2024-01-12T03:46:29Z)
- Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models [59.094601993993535]
Text-to-image (T2I) personalization allows users to combine their own visual concepts in natural language prompts.
Most existing encoders are limited to a single-class domain, which hinders their ability to handle diverse concepts.
We propose a domain-agnostic method that does not require any specialized dataset or prior information about the personalized concepts.
arXiv Detail & Related papers (2023-07-13T17:46:42Z)
- Towards Unifying Medical Vision-and-Language Pre-training via Soft Prompts [63.84720380390935]
There exist two typical types, i.e., the fusion-encoder type and the dual-encoder type, depending on whether a heavy fusion module is used.
We propose an effective yet straightforward scheme named PTUnifier to unify the two types.
We first unify the input format by introducing visual and textual prompts, which serve as a feature bank that stores the most representative images/texts.
arXiv Detail & Related papers (2023-02-17T15:43:42Z)
- Prompt Learning with Optimal Transport for Vision-Language Models [25.928455328563402]
We learn multiple comprehensive prompts to describe diverse characteristics of categories such as intrinsic attributes or extrinsic contexts.
We propose to apply optimal transport to match the vision and text modalities.
In the inner loop, we optimize the optimal transport distance to align visual features and prompts via the Sinkhorn algorithm, while in the outer loop, we learn the prompts from the supervised data using this distance (a minimal Sinkhorn sketch appears after this list).
arXiv Detail & Related papers (2022-10-03T22:21:07Z)
- Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model [39.722927180264584]
We propose a novel Dual-modality Prompt Tuning (DPT) paradigm through learning text and visual prompts simultaneously.
To make the final image feature concentrate more on the target visual concept, a Class-Aware Visual Prompt Tuning scheme is proposed.
arXiv Detail & Related papers (2022-08-17T15:06:36Z)
- Vision-Language Pre-Training for Boosting Scene Text Detectors [57.08046351495244]
We specifically adapt vision-language joint learning for scene text detection.
We propose to learn contextualized, joint representations through vision-language pre-training.
The pre-trained model is able to produce more informative representations with richer semantics.
arXiv Detail & Related papers (2022-04-29T03:53:54Z)
- PromptDet: Expand Your Detector Vocabulary with Uncurated Images [47.600059694034]
The goal of this work is to establish a scalable pipeline for expanding an object detector towards novel/unseen categories, using zero manual annotations.
We propose a two-stage open-vocabulary object detector that categorises each box proposal by a classifier generated from the text encoder of a pre-trained visual-language model.
To scale up the learning procedure towards detecting a wider spectrum of objects, we exploit available online resources, iteratively updating the prompts and later self-training the proposed detector with pseudo labels generated on a large corpus of noisy, uncurated web images.
arXiv Detail & Related papers (2022-03-30T17:50:21Z)
- Learning to Prompt for Vision-Language Models [82.25005817904027]
Vision-language pre-training has emerged as a promising alternative for representation learning.
It shifts from the tradition of using images and discrete labels for learning a fixed set of weights, seen as visual concepts, to aligning images and raw text for two separate encoders.
Such a paradigm benefits from a broader source of supervision and allows zero-shot transfer to downstream tasks.
arXiv Detail & Related papers (2021-09-02T17:57:31Z)
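As referenced in the optimal-transport entry above, the following is a minimal, self-contained sketch of Sinkhorn-based alignment between local visual features and learnable prompt embeddings. The shapes, entropic regularization weight, and iteration count are illustrative assumptions and do not reproduce the PLOT authors' code.

```python
# Sketch: entropic-regularized optimal transport between two feature sets,
# with the transport plan computed by Sinkhorn iterations.
import torch

def sinkhorn(cost, eps=0.1, n_iters=50):
    """Transport plan for a cost matrix (M x N) with uniform marginals."""
    M, N = cost.shape
    mu = torch.full((M,), 1.0 / M, dtype=cost.dtype, device=cost.device)
    nu = torch.full((N,), 1.0 / N, dtype=cost.dtype, device=cost.device)
    K = torch.exp(-cost / eps)           # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return torch.diag(u) @ K @ torch.diag(v)   # transport plan T

def ot_distance(visual_feats, prompt_feats):
    """OT distance between (M, d) visual features and (N, d) prompts."""
    v = torch.nn.functional.normalize(visual_feats, dim=-1)
    p = torch.nn.functional.normalize(prompt_feats, dim=-1)
    cost = 1.0 - v @ p.t()               # cosine cost matrix
    with torch.no_grad():                # inner loop: plan is held fixed
        T = sinkhorn(cost)
    return (T * cost).sum()              # outer loop: prompts get gradients

# Example: 49 local visual tokens (e.g. a 7x7 feature map) vs. 4 prompts.
# visual_feats = torch.randn(49, 512)
# prompt_feats = torch.nn.Parameter(torch.randn(4, 512))
# loss = ot_distance(visual_feats, prompt_feats); loss.backward()
```

Computing the plan without gradients mirrors the inner/outer split described in that entry: the transport plan is treated as fixed, and only the prompts receive gradients through the cost matrix.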