Learning Label Modular Prompts for Text Classification in the Wild
- URL: http://arxiv.org/abs/2211.17142v1
- Date: Wed, 30 Nov 2022 16:26:38 GMT
- Title: Learning Label Modular Prompts for Text Classification in the Wild
- Authors: Hailin Chen, Amrita Saha, Shafiq Joty, Steven C.H. Hoi
- Abstract summary: We propose text classification in-the-wild, which introduces different non-stationary training/testing stages.
Decomposing a complex task into modular components can enable robust generalisation under such a non-stationary environment.
We propose MODULARPROMPT, a label-modular prompt tuning framework for text classification tasks.
- Score: 56.66187728534808
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models usually assume i.i.d. data during training and testing, but data and tasks in the real world often change over time. To emulate this transient nature of the real world, we propose a challenging but practical task: text classification in-the-wild, which introduces different non-stationary training/testing stages. Decomposing a complex task into modular components can enable robust generalisation under such a non-stationary environment. However, current modular approaches in NLP do not take advantage of recent advances in parameter-efficient tuning of pretrained language models. To close this gap, we propose MODULARPROMPT, a label-modular prompt tuning framework for text classification tasks. In MODULARPROMPT, the input prompt consists of a sequence of soft label prompts, each encoding modular knowledge related to the corresponding class label. In the two most formidable settings, MODULARPROMPT outperforms relevant baselines by a large margin, demonstrating strong generalisation ability. We also conduct a comprehensive analysis to validate whether the learned prompts satisfy the properties of a modular representation.
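A minimal sketch of this label-modular construction, assuming a frozen PLM encoder; the class `LabelModularPrompt`, the prompt length, and the dimensions are illustrative stand-ins, not the authors' released code:

```python
import torch
import torch.nn as nn

class LabelModularPrompt(nn.Module):
    """One soft prompt per class label; the input prompt for a stage is the
    concatenation of the prompts of the labels active in that stage."""

    def __init__(self, num_labels: int, prompt_len: int, hidden_dim: int):
        super().__init__()
        # One learnable (prompt_len x hidden_dim) block of soft tokens per label.
        self.label_prompts = nn.Parameter(
            torch.randn(num_labels, prompt_len, hidden_dim) * 0.02
        )

    def forward(self, active_labels: list[int]) -> torch.Tensor:
        # Select and concatenate the prompt blocks of the labels seen in this
        # stage; unseen label combinations reuse the same modular pieces.
        parts = [self.label_prompts[i] for i in active_labels]
        return torch.cat(parts, dim=0)  # (len(active_labels) * prompt_len, hidden_dim)

# Usage: prepend the stage prompt to the (frozen) PLM's input embeddings.
prompts = LabelModularPrompt(num_labels=10, prompt_len=8, hidden_dim=768)
stage_prompt = prompts([2, 5, 7])  # labels present in the current stage
print(stage_prompt.shape)          # torch.Size([24, 768])
```

Because each label owns its own prompt block, a stage that introduces a new label combination can reuse already-trained blocks instead of relearning a monolithic prompt.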
Related papers
- Mixture of Prompt Learning for Vision Language Models [12.828490399811376]
We propose a mixture-of-soft-prompts learning method that incorporates a routing module.
This module captures a dataset's varied styles and dynamically selects the most suitable prompts for each instance (a minimal routing sketch follows this entry).
We also implement semantically grouped text-level supervision, initializing each soft prompt with the token embeddings of manually designed templates from its group.
arXiv Detail & Related papers (2024-09-18T14:25:02Z) - Adapting Vision-Language Models to Open Classes via Test-Time Prompt Tuning [50.26965628047682]
- Adapting Vision-Language Models to Open Classes via Test-Time Prompt Tuning [50.26965628047682]
Adapting pre-trained models to open classes is a challenging problem in machine learning.
In this paper, we combine the advantages of both and propose a test-time prompt tuning approach.
Our proposed method outperforms all comparison methods on average across both base and new classes.
arXiv Detail & Related papers (2024-08-29T12:34:01Z) - Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation [59.37775534633868]
We present an extremely straightforward approach to transferring pre-trained, task-specific PEFT modules between same-family PLMs (a minimal transfer sketch follows this entry).
We also propose a method that allows the transfer of modules between incompatible PLMs without any change in the inference complexity.
arXiv Detail & Related papers (2024-03-27T17:50:00Z) - CFPL-FAS: Class Free Prompt Learning for Generalizable Face Anti-spoofing [66.6712018832575]
- CFPL-FAS: Class Free Prompt Learning for Generalizable Face Anti-spoofing [66.6712018832575]
Domain generalization (DG) based Face Anti-Spoofing (FAS) aims to improve the model's performance on unseen domains.
We make use of large-scale VLMs such as CLIP, leveraging the textual features to dynamically adjust the classifier's weights and explore generalizable visual features (a minimal sketch of this idea follows this entry).
arXiv Detail & Related papers (2024-03-21T11:58:50Z) - On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata into continuous prompts (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-07-04T02:47:42Z) - Gradient-Regulated Meta-Prompt Learning for Generalizable
- Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models [137.74524357614285]
We introduce GRAM, a novel Gradient-RegulAted Meta-prompt learning framework.
It helps pre-trained models adapt to downstream tasks in a parameter- and data-efficient way.
GRAM can be easily incorporated into various prompt tuning methods in a model-agnostic way.
arXiv Detail & Related papers (2023-03-12T05:03:37Z) - Truth-Conditional Captioning of Time Series Data [34.65925116012727]
We explore the task of automatically generating natural language descriptions of salient patterns in a time series.
A model for this task should be able to extract high-level patterns such as the presence of a peak or a dip.
We propose a computational model with a truth-conditional architecture which first runs small learned programs on the input time series (a minimal sketch of such a program follows this entry).
We find that the proposed model is able to generate high-precision captions even though we consider a small and simple space of module types.
arXiv Detail & Related papers (2021-10-05T06:28:37Z) - Prompt-Learning for Fine-Grained Entity Typing [40.983849729537795]
- Prompt-Learning for Fine-Grained Entity Typing [40.983849729537795]
We investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.
We propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types.
arXiv Detail & Related papers (2021-08-24T09:39:35Z)