Prompt-Learning for Short Text Classification
- URL: http://arxiv.org/abs/2202.11345v1
- Date: Wed, 23 Feb 2022 08:07:06 GMT
- Title: Prompt-Learning for Short Text Classification
- Authors: Yi Zhu, Xinke Zhou, Jipeng Qiang, Yun Li, Yunhao Yuan, Xindong Wu
- Abstract summary: In short texts, the extremely short length, feature sparsity, and high ambiguity pose huge challenges to classification tasks.
In this paper, we propose a simple short text classification approach that makes use of prompt-learning based on knowledgeable expansion.
- Score: 30.53216712864025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In short texts, the extremely short length, feature sparsity, and high
ambiguity pose huge challenges to classification tasks. Recently, prompt-learning
has attracted a vast amount of attention and research as an effective method for
tuning Pre-trained Language Models (PLMs) for specific downstream tasks. The
main intuition behind prompt-learning is to insert a template into the input
and convert the text classification task into an equivalent cloze-style task.
However, most prompt-learning methods expand label words manually or consider
only the class name when incorporating knowledge into cloze-style prediction,
which inevitably incurs omissions and bias in classification tasks. In this
paper, we propose a simple short text classification approach that makes use
of prompt-learning based on knowledgeable expansion, which considers both the
short text itself and the class name when expanding the label word space.
Specifically, the top $N$ concepts related to each entity in the short text are
retrieved from an open Knowledge Graph such as Probase, and the expanded label
words are further refined by a distance calculation between the selected
concepts and the class label. Experimental results show that our approach
obtains clear improvements over other fine-tuning, prompt-learning, and
knowledgeable prompt-tuning methods, outperforming the state of the art by up
to 6 accuracy points on three well-known datasets.
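As a rough illustration of the pipeline the abstract describes, the sketch below mocks the Probase concept lookup (`retrieve_concepts` is a hypothetical stand-in, since the paper does not specify an API), refines the expanded label words by cosine similarity between concept and class-label embeddings, and scores classes through a cloze-style [MASK] prediction. The model name, template, and similarity threshold are illustrative assumptions, not the authors' exact settings.

```python
# Minimal sketch of the knowledgeable-expansion pipeline described above.
# The Probase lookup is mocked, and the model, template, and threshold
# are illustrative assumptions, not the authors' exact settings.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def retrieve_concepts(entity, top_n=5):
    """Hypothetical stand-in for a top-N concept query against Probase."""
    mock_kg = {"ipad": ["device", "tablet", "technology", "gadget", "product"]}
    return mock_kg.get(entity.lower(), [])[:top_n]

def embed(word):
    """Embed a single-token word with the PLM's input embedding table."""
    wid = tokenizer.convert_tokens_to_ids(word)
    if wid == tokenizer.unk_token_id:
        return None
    return model.get_input_embeddings().weight[wid]

def expand_label_words(class_names, entities, top_n=5, threshold=0.3):
    """Expand each class's label words with KG concepts, refined by the
    cosine similarity between concept and class-label embeddings."""
    expanded = {c: {c} for c in class_names}
    for entity in entities:
        for concept in retrieve_concepts(entity, top_n):
            c_emb = embed(concept)
            if c_emb is None:
                continue
            for cls in class_names:
                l_emb = embed(cls)
                if l_emb is not None and \
                        F.cosine_similarity(c_emb, l_emb, dim=0) > threshold:
                    expanded[cls].add(concept)
    return expanded

def classify(short_text, class_names, entities):
    """Cloze-style prediction: score each class by the mean [MASK] logit
    of its expanded label words."""
    verbalizer = expand_label_words(class_names, entities)
    prompt = f"{short_text} This text is about [MASK]."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    scores = {}
    for cls, words in verbalizer.items():
        ids = [tokenizer.convert_tokens_to_ids(w) for w in words
               if w in tokenizer.vocab]
        scores[cls] = logits[ids].mean().item() if ids else float("-inf")
    return max(scores, key=scores.get)

print(classify("new ipad unveiled today", ["technology", "sports"], ["ipad"]))
```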
Related papers
- SciPrompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics [2.3742710594744105]
We introduce SciPrompt, a framework designed to automatically retrieve scientific topic-related terms for low-resource text classification tasks.
Our method outperforms state-of-the-art, prompt-based fine-tuning methods on scientific text classification tasks under few and zero-shot settings.
arXiv Detail & Related papers (2024-10-02T18:45:04Z)
- A Novel Prompt-tuning Method: Incorporating Scenario-specific Concepts into a Verbalizer [15.612761980503658]
We propose a label-word construction process that incorporates scenario-specific concepts.
Specifically, we extract rich concepts from task-specific scenarios as label-word candidates.
We develop a novel cascade calibration module to refine the candidates into a set of label words for each class.
arXiv Detail & Related papers (2024-01-10T15:02:35Z)
- Description-Enhanced Label Embedding Contrastive Learning for Text Classification [65.01077813330559]
The authors introduce Self-Supervised Learning (SSL) into the model learning process and design a novel self-supervised Relation of Relation (R2) classification task.
They propose a Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as joint optimization targets.
The method also exploits external knowledge from WordNet to obtain multi-aspect descriptions for label semantic learning.
arXiv Detail & Related papers (2023-06-15T02:19:34Z)
- Exploring Structured Semantic Prior for Multi Label Recognition with Incomplete Labels [60.675714333081466]
Multi-label recognition (MLR) with incomplete labels is very challenging.
Recent works strive to explore the image-to-label correspondence in vision-language models, i.e., CLIP, to compensate for insufficient annotations.
We advocate remedying the deficiency of label supervision for the MLR with incomplete labels by deriving a structured semantic prior.
arXiv Detail & Related papers (2023-03-23T12:39:20Z)
- M-Tuning: Prompt Tuning with Mitigated Label Bias in Open-Set Scenarios [103.6153593636399]
We propose a vision-language prompt tuning method with mitigated label bias (M-Tuning).
It introduces open words from WordNet to extend the prompt texts beyond closed-set label words, so that prompts are tuned in a simulated open-set scenario.
Our method achieves the best performance on datasets with various scales, and extensive ablation studies also validate its effectiveness.
arXiv Detail & Related papers (2023-03-09T09:05:47Z)
- Task-Specific Embeddings for Ante-Hoc Explainable Text Classification [6.671252951387647]
We propose an alternative training objective in which we learn task-specific embeddings of text.
Our proposed objective learns embeddings such that all texts that share the same target class label should be close together.
We present extensive experiments which show that the benefits of ante-hoc explainability and incremental learning come at no cost in overall classification accuracy.
arXiv Detail & Related papers (2022-11-30T19:56:25Z)
- Label Semantic Aware Pre-training for Few-shot Text Classification [53.80908620663974]
We propose Label Semantic Aware Pre-training (LSAP) to improve the generalization and data efficiency of text classification systems.
LSAP incorporates label semantics into pre-trained generative models (T5 in our case) by performing secondary pre-training on labeled sentences from a variety of domains.
arXiv Detail & Related papers (2022-04-14T17:33:34Z)
- Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification [68.3291372168167]
We focus on incorporating external knowledge into the verbalizer, forming a knowledgeable prompt-tuning (KPT) approach.
We expand the verbalizer's label word space using external knowledge bases (KBs) and refine it with the PLM itself before prediction; a rough sketch of this refinement step appears after this list.
Experiments on zero and few-shot text classification tasks demonstrate the effectiveness of knowledgeable prompt-tuning.
arXiv Detail & Related papers (2021-08-04T13:00:16Z)
- KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction [111.74812895391672]
We propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).
We inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words.
arXiv Detail & Related papers (2021-04-15T17:57:43Z)
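The KPT entry above describes an expand-then-refine verbalizer. As a rough illustration of the refinement step (a sketch only, not the authors' exact procedure), one can prune expanded label words that the PLM itself finds implausible at the [MASK] position of a bare template; the candidate lists, template, and keep ratio below are illustrative assumptions.

```python
# Rough sketch of KPT-style verbalizer refinement: keep only the label
# words the PLM itself ranks as plausible fillers for [MASK] in a bare
# template. Candidates, template, and keep_ratio are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def refine_label_words(candidates, template, keep_ratio=0.5):
    """Keep the top fraction of each class's label words by PLM prior."""
    inputs = tokenizer(template, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_pos].softmax(dim=-1)
    refined = {}
    for cls, words in candidates.items():
        scored = [(w, probs[tokenizer.convert_tokens_to_ids(w)].item())
                  for w in words if w in tokenizer.vocab]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        keep = max(1, int(len(scored) * keep_ratio))
        refined[cls] = [w for w, _ in scored[:keep]]
    return refined

candidates = {  # e.g. expanded from an external KB such as WordNet
    "sports": ["sports", "football", "athletics", "tournament"],
    "politics": ["politics", "government", "election", "parliament"],
}
print(refine_label_words(candidates, "A [MASK] news article."))
```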