Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt
Verbalizer for Text Classification
- URL: http://arxiv.org/abs/2108.02035v1
- Date: Wed, 4 Aug 2021 13:00:16 GMT
- Title: Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt
Verbalizer for Text Classification
- Authors: Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li and
Maosong Sun
- Abstract summary: We focus on incorporating external knowledge into the verbalizer, forming knowledgeable prompt-tuning (KPT).
We expand the label word space of the verbalizer using external knowledge bases (KBs) and refine the expanded label word space with the PLM itself before using it for prediction.
Experiments on zero- and few-shot text classification tasks demonstrate the effectiveness of knowledgeable prompt-tuning.
- Score: 68.3291372168167
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tuning pre-trained language models (PLMs) with task-specific prompts has been
a promising approach for text classification. In particular, previous studies
suggest that prompt-tuning has a remarkable advantage in low-data scenarios
over generic fine-tuning methods with extra classifiers. The core idea of
prompt-tuning is to insert text pieces, i.e., a template, into the input and
transform a classification problem into a masked language modeling problem,
where a crucial step is to construct a projection, i.e., a verbalizer, between
the label space and a label word space. A verbalizer is usually handcrafted or
searched by gradient descent, which may lack coverage and bring considerable
bias and high variance to the results. In this work, we focus on incorporating
external knowledge into the verbalizer, forming knowledgeable prompt-tuning
(KPT), to improve and stabilize prompt-tuning. Specifically, we expand the
label word space of the verbalizer using external knowledge bases (KBs) and
refine the expanded label word space with the PLM itself before using it for
prediction. Extensive experiments on zero- and few-shot text classification
tasks demonstrate the effectiveness of knowledgeable prompt-tuning.
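As a rough illustration of this pipeline, the sketch below (assuming Python with Hugging Face transformers and bert-base-uncased, none of which are specified by this summary) inserts a template around the input, reads the masked-LM distribution at the mask position, scores an expanded label word set, and averages per class. The template, the label word lists, and the single-token refinement rule are hypothetical stand-ins: KPT itself retrieves label words from external KBs and refines them with PLM-based calibration.

```python
# Minimal sketch of a knowledgeable verbalizer; all concrete choices below
# (template, label words, refinement rule) are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Expanded label word space: each class maps to many related words, as if
# retrieved from a knowledge base. These lists are placeholders, not the
# paper's actual KB output.
label_words = {
    "SPORTS": ["sports", "football", "basketball", "athlete", "tennis"],
    "BUSINESS": ["business", "economy", "finance", "market", "trade"],
}

# Template insertion: classification becomes masked language modeling.
text = "The team clinched the championship in overtime."
prompt = f"{text} This topic is about {tokenizer.mask_token}."

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Distribution over the vocabulary at the [MASK] position.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, mask_pos].softmax(dim=-1)

def word_prob(word: str):
    """Probability of a label word at the mask. Toy refinement: drop words
    the tokenizer splits into multiple pieces (KPT's refinement is richer)."""
    ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    return probs[ids[0]].item() if len(ids) == 1 else None

# Predict by averaging the surviving label words' probabilities per class.
scores = {}
for label, words in label_words.items():
    ps = [p for w in words if (p := word_prob(w)) is not None]
    scores[label] = sum(ps) / max(len(ps), 1)

print(max(scores, key=scores.get), scores)
```

The averaging step treats every refined label word as an equal vote for its class; the paper's method additionally calibrates and weights the expanded words with the PLM itself, which this sketch omits.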
Related papers
- SciPrompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics [2.3742710594744105]
We introduce SciPrompt, a framework designed to automatically retrieve scientific topic-related terms for low-resource text classification tasks.
Our method outperforms state-of-the-art, prompt-based fine-tuning methods on scientific text classification tasks under few and zero-shot settings.
arXiv Detail & Related papers (2024-10-02T18:45:04Z)
- Description-Enhanced Label Embedding Contrastive Learning for Text Classification [65.01077813330559]
The paper incorporates Self-Supervised Learning (SSL) into the model learning process and designs a novel self-supervised Relation of Relation (R2) classification task.
It proposes a Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as joint optimization targets.
It exploits external knowledge from WordNet to obtain multi-aspect descriptions for label semantic learning.
arXiv Detail & Related papers (2023-06-15T02:19:34Z)
- M-Tuning: Prompt Tuning with Mitigated Label Bias in Open-Set Scenarios [103.6153593636399]
We propose a vision-language prompt tuning method with mitigated label bias (M-Tuning).
It introduces open words from WordNet to extend the prompt texts beyond closed-set label words, so that prompts are tuned in a simulated open-set scenario.
Our method achieves the best performance on datasets with various scales, and extensive ablation studies also validate its effectiveness.
arXiv Detail & Related papers (2023-03-09T09:05:47Z)
- CCPrefix: Counterfactual Contrastive Prefix-Tuning for Many-Class Classification [57.62886091828512]
We propose a brand-new prefix-tuning method, Counterfactual Contrastive Prefix-tuning (CCPrefix) for many-class classification.
Basically, an instance-dependent soft prefix, derived from fact-counterfactual pairs in the label space, is leveraged to complement the language verbalizers in many-class classification.
arXiv Detail & Related papers (2022-11-11T03:45:59Z)
- PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners [15.130992223266734]
We propose a novel label-guided data augmentation framework, PromptDA, which exploits the enriched label semantic information for data augmentation.
Our experiment results on few-shot text classification tasks demonstrate the superior performance of the proposed framework.
arXiv Detail & Related papers (2022-05-18T22:15:20Z)
- Prompt-Learning for Short Text Classification [30.53216712864025]
In short texts, the extremely short length, feature sparsity, and high ambiguity pose huge challenges to classification tasks.
In this paper, we propose a simple short text classification approach that makes use of prompt-learning based on knowledgeable expansion.
arXiv Detail & Related papers (2022-02-23T08:07:06Z)
- Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer [12.596033546002321]
In this paper, we focus on eliciting knowledge from pretrained language models and propose a prototypical prompt verbalizer for prompt-tuning.
For zero-shot settings, knowledge is elicited from pretrained language models by a manually designed template to form initial prototypical embeddings.
For few-shot settings, models are tuned to learn meaningful and interpretable prototypical embeddings.
arXiv Detail & Related papers (2022-01-14T12:04:37Z)
- UCPhrase: Unsupervised Context-aware Quality Phrase Tagging [63.86606855524567]
UCPhrase is a novel unsupervised context-aware quality phrase tagger.
We induce high-quality phrase spans as silver labels from consistently co-occurring word sequences.
We show that our design is superior to state-of-the-art pre-trained, unsupervised, and distantly supervised methods.
arXiv Detail & Related papers (2021-05-28T19:44:24Z)
- KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction [111.74812895391672]
We propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).
We inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words.
arXiv Detail & Related papers (2021-04-15T17:57:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.