Prompt Algebra for Task Composition
- URL: http://arxiv.org/abs/2306.00310v1
- Date: Thu, 1 Jun 2023 03:20:54 GMT
- Title: Prompt Algebra for Task Composition
- Authors: Pramuditha Perera, Matthew Trager, Luca Zancato, Alessandro Achille,
Stefano Soatto
- Abstract summary: We consider Visual Language Models with prompt tuning as our base classifier.
We propose constrained prompt tuning to improve the performance of the composite classifier.
On UTZappos it improves classification accuracy over the best base model by 8.45% on average.
- Score: 131.97623832435812
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate whether prompts learned independently for different tasks can
later be combined through prompt algebra to obtain a model that supports
composition of tasks. We consider Visual Language Models (VLMs) with prompt
tuning as our base classifier and formally define the notion of prompt algebra.
We propose constrained prompt tuning to improve the performance of the composite
classifier. In the proposed scheme, prompts are constrained to lie in the
lower-dimensional subspace spanned by the basis vectors of the pre-trained
vocabulary. Further regularization is added to ensure that the learned prompt
is grounded correctly in the existing pre-trained vocabulary. We demonstrate
the effectiveness of our method on object classification and object-attribute
classification datasets. On average, our composite model obtains classification
accuracy within 2.5% of the best base model. On UTZappos it improves
classification accuracy over the best base model by 8.45% on average.
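A minimal sketch of the two ingredients described above, constrained prompt tuning and prompt composition, assuming a frozen vocabulary embedding matrix and using coefficient averaging as one illustrative composition rule; the shapes, names, and the SVD basis choice are assumptions, not the paper's implementation:

```python
# Hedged sketch: prompts parameterized inside a vocabulary subspace,
# then composed by prompt algebra. Not the authors' code.
import torch

torch.manual_seed(0)
d, vocab_size, k, n_prompt = 512, 1000, 32, 8

vocab_emb = torch.randn(vocab_size, d)          # frozen pre-trained vocabulary
# One plausible subspace choice: the top-k right singular vectors of the
# embedding matrix span a low-dimensional "vocabulary subspace".
_, _, Vh = torch.linalg.svd(vocab_emb, full_matrices=False)
basis = Vh[:k]                                   # (k, d), rows orthonormal

# Learn coefficients rather than free vectors: prompts lie in span(basis).
coeff_a = torch.randn(n_prompt, k, requires_grad=True)   # task A prompts
coeff_b = torch.randn(n_prompt, k, requires_grad=True)   # task B prompts

def prompts(coeff):
    return coeff @ basis                         # (n_prompt, d)

def grounding_loss(p):
    # Regularizer: pull each prompt toward its nearest vocabulary embedding.
    return torch.cdist(p, vocab_emb).min(dim=1).values.mean()

# During training, grounding_loss(prompts(coeff_a)) would be added to the
# task loss. "Prompt algebra" then composes independently trained prompts,
# e.g. by averaging their coefficients in the shared subspace.
composite = prompts(0.5 * (coeff_a + coeff_b))
print(composite.shape, float(grounding_loss(composite)))
```

Learning coefficients over a fixed basis keeps every prompt inside the vocabulary subspace by construction; the grounding term additionally pulls prompts toward actual token embeddings, which is what makes independently trained prompts comparable enough to combine.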
Related papers
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels on prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- ProTeCt: Prompt Tuning for Taxonomic Open Set Classification [59.59442518849203]
Few-shot adaptation methods do not fare well in the taxonomic open set (TOS) setting.
We propose Prompt Tuning for Hierarchical Consistency (ProTeCt), a technique that calibrates the hierarchical consistency of model predictions across label-set granularities.
arXiv Detail & Related papers (2023-06-04T02:55:25Z)
- Class-Incremental Learning with Strong Pre-trained Models [97.84755144148535]
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes).
We explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes.
Our proposed method is robust and generalizes to all analyzed CIL settings.
arXiv Detail & Related papers (2022-04-07T17:58:07Z)
- Multiple Classifiers Based Maximum Classifier Discrepancy for Unsupervised Domain Adaptation [25.114533037440896]
We propose to extend the structure of two classifiers to multiple classifiers to further boost its performance.
We demonstrate that adopting a structure of three classifiers typically yields the best trade-off between accuracy and efficiency.
arXiv Detail & Related papers (2021-08-02T03:00:13Z)
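A hedged sketch of the multi-classifier discrepancy idea in the entry above, with three classifier heads on a shared feature extractor; the plain pairwise-L1 discrepancy and all names are illustrative assumptions rather than the paper's code:

```python
# MCD-style discrepancy among three classifier heads (illustrative only).
import itertools
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Linear(64, 32), nn.ReLU())        # shared extractor
heads = nn.ModuleList([nn.Linear(32, 10) for _ in range(3)])

def discrepancy(x):
    # Mean pairwise L1 distance between the heads' class probabilities.
    probs = [torch.softmax(h(feat(x)), dim=1) for h in heads]
    pairs = list(itertools.combinations(probs, 2))
    return sum((p - q).abs().mean() for p, q in pairs) / len(pairs)

x_target = torch.randn(16, 64)        # unlabeled target-domain batch
d = discrepancy(x_target)
# Adversarial schedule (sketched): the heads maximize d on target data,
# the feature extractor minimizes it, alternating with supervised
# classification steps on labeled source data.
print(float(d))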
- Optimizing Black-box Metrics with Iterative Example Weighting [32.682652530189266]
We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix.
Our approach is to adaptively learn example weights on the training dataset such that the resulting weighted objective best approximates the metric on the validation sample.
arXiv Detail & Related papers (2021-02-18T17:19:09Z)
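A toy sketch of the example-weighting loop described in the entry above: per-example training weights are searched so that the weighted model improves a metric that can only be evaluated, not differentiated. The random-perturbation update is a crude stand-in for the paper's adaptive weighting, and accuracy stands in for an arbitrary confusion-matrix metric:

```python
# Illustrative black-box metric optimization via example weighting.
import torch

torch.manual_seed(0)
X_tr, y_tr = torch.randn(200, 5), torch.randint(0, 2, (200,)).float()
X_va, y_va = torch.randn(100, 5), torch.randint(0, 2, (100,)).float()
weights = torch.ones(200)                       # per-example training weights

def fit(w):
    # Weighted logistic regression on the training set.
    theta = torch.zeros(5, requires_grad=True)
    opt = torch.optim.SGD([theta], lr=0.5)
    for _ in range(100):
        opt.zero_grad()
        loss = (w * torch.nn.functional.binary_cross_entropy_with_logits(
            X_tr @ theta, y_tr, reduction="none")).mean()
        loss.backward()
        opt.step()
    return theta.detach()

def black_box_metric(theta):
    # Only evaluable, never differentiated through.
    return ((X_va @ theta > 0).float() == y_va).float().mean().item()

best = black_box_metric(fit(weights))
for _ in range(3):                              # a few outer iterations
    trial = weights * torch.empty(200).uniform_(0.8, 1.2)  # perturb weights
    score = black_box_metric(fit(trial))
    if score > best:                            # keep weights that help
        weights, best = trial, score
print(round(best, 3))
```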
- Learning Better Sentence Representation with Syntax Information [0.0]
We propose a novel approach to combining syntax information with a pre-trained language model.
Our model achieves 91.2% accuracy, outperforming the baseline model by 37.8% on the sentence completion task.
arXiv Detail & Related papers (2021-01-09T12:15:08Z)
- Automated Concatenation of Embeddings for Structured Prediction [75.44925576268052]
We propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.
We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model.
arXiv Detail & Related papers (2020-10-10T14:03:20Z)
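An illustrative REINFORCE-style controller for the embedding-selection loop sketched in the entry above; the reward function is a stand-in for "train the task model on the sampled concatenation and return its accuracy", and all names are assumptions:

```python
# Controller samples a binary mask over candidate embeddings and is
# updated with REINFORCE against a moving-average baseline.
import torch

torch.manual_seed(0)
n_candidates = 5                      # e.g. word, char, and contextual embeddings
logits = torch.zeros(n_candidates, requires_grad=True)   # controller params
opt = torch.optim.Adam([logits], lr=0.1)

def reward(mask):
    # Stand-in reward: pretend candidates 1 and 3 are the useful ones.
    target = torch.tensor([0., 1., 0., 1., 0.])
    return 1.0 - (mask - target).abs().mean().item()

baseline = 0.0
for step in range(200):
    probs = torch.sigmoid(logits)
    mask = torch.bernoulli(probs.detach())       # sample a concatenation
    r = reward(mask)
    baseline = 0.9 * baseline + 0.1 * r          # moving-average baseline
    log_prob = (mask * probs.log() + (1 - mask) * (1 - probs).log()).sum()
    loss = -(r - baseline) * log_prob            # REINFORCE with baseline
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.sigmoid(logits).round())             # learned selection, rounded
```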
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
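A hedged sketch of full-text plausibility scoring in the spirit of the entry above: each answer candidate is ranked by the language-model log-likelihood of the fully instantiated sentence. GPT-2 serves as a stand-in scorer here; the paper's model and prompt format differ:

```python
# Rank candidate completions by full-text log-likelihood (illustrative).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_likelihood(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)      # out.loss is mean token NLL
    return -out.loss.item() * (ids.shape[1] - 1)   # approximate total log-prob

question = "The man poured water on the campfire, so"
candidates = ["the fire went out.", "the fire grew stronger."]
scores = {c: log_likelihood(f"{question} {c}") for c in candidates}
print(max(scores, key=scores.get))     # ideally the plausible ending
```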
- Diversity-Aware Weighted Majority Vote Classifier for Imbalanced Data [1.2944868613449219]
We propose a diversity-aware ensemble learning algorithm, DAMVI, to deal with imbalanced binary classification tasks.
We show the efficiency of the proposed approach with respect to state-of-the-art models on predictive maintenance, credit card fraud detection, webpage classification and medical applications.
arXiv Detail & Related papers (2020-04-16T11:27:50Z)
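A generic weighted-majority-vote sketch for an imbalanced binary task, in the spirit of the DAMVI entry above; weighting base classifiers by validation balanced accuracy is an illustrative stand-in for the paper's diversity-aware weights:

```python
# Weighted majority vote over bootstrap-trained base classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + 0.5 * rng.normal(size=600) > 1.0).astype(int)  # imbalanced labels
X_tr, y_tr, X_va, y_va = X[:400], y[:400], X[400:], y[400:]

models, weights = [], []
for _ in range(5):
    idx = rng.integers(0, len(X_tr), len(X_tr))   # bootstrap sample
    m = LogisticRegression().fit(X_tr[idx], y_tr[idx])
    models.append(m)
    # Illustrative weight: validation balanced accuracy of this base model.
    weights.append(balanced_accuracy_score(y_va, m.predict(X_va)))

weights = np.array(weights) / np.sum(weights)
votes = np.stack([m.predict(X_va) for m in models])      # (5, n_va) in {0,1}
majority = (weights @ (2 * votes - 1) > 0).astype(int)   # signed weighted vote
print(balanced_accuracy_score(y_va, majority))
```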
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.