PTR: Prompt Tuning with Rules for Text Classification
- URL: http://arxiv.org/abs/2105.11259v1
- Date: Mon, 24 May 2021 13:24:02 GMT
- Title: PTR: Prompt Tuning with Rules for Text Classification
- Authors: Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun
- Abstract summary: Fine-tuned pre-trained language models (PLMs) have achieved impressive performance on almost all NLP tasks.
We propose prompt tuning with rules (PTR) for many-class text classification.
PTR is able to encode prior knowledge of each class into prompt tuning.
- Score: 64.1655047016891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-tuned pre-trained language models (PLMs) have achieved impressive
performance on almost all NLP tasks. By using additional prompts to fine-tune
PLMs, we can further stimulate the rich knowledge distributed in PLMs to better
serve downstream tasks. Prompt tuning has achieved promising results on some
few-class classification tasks such as sentiment classification and natural
language inference. However, manually designing a large number of language
prompts is cumbersome and error-prone, and for auto-generated prompts, verifying
their effectiveness in non-few-shot scenarios is expensive and time-consuming.
Hence, it is challenging for prompt tuning to address many-class classification
tasks. To this end, we propose prompt tuning with rules (PTR) for many-class
text classification, and apply logic rules to construct prompts with several
sub-prompts. In this way, PTR is able to encode prior knowledge of each class
into prompt tuning. We conduct experiments on relation classification, a
typical many-class classification task, and the results on benchmarks show that
PTR can significantly and consistently outperform existing state-of-the-art
baselines. This indicates that PTR is a promising approach for leveraging PLMs
on such complicated classification tasks.
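To make the sub-prompt composition concrete, below is a minimal Python sketch of how entity-type sub-prompts and a relation sub-prompt could be conjoined into a single template whose [MASK] predictions jointly select a class. The template wording, label words, rule table, and relation names are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of PTR-style sub-prompt composition for relation classification.
# The template wording, label words, and rule table below are illustrative
# assumptions; they are not the exact ones used in the PTR paper.

def entity_sub_prompt(entity: str) -> str:
    """Sub-prompt whose [MASK] should predict the entity's type (e.g. 'person')."""
    return f"the [MASK] {entity}"

def relation_sub_prompt() -> str:
    """Sub-prompt whose [MASK] should predict the connecting relation phrase.
    (The actual method may spread this over several [MASK] positions.)"""
    return "[MASK]"

def build_prompt(sentence: str, subj: str, obj: str) -> str:
    """Conjoin the sub-prompts; the tuple of [MASK] predictions identifies the class."""
    template = " ".join(
        [entity_sub_prompt(subj), relation_sub_prompt(), entity_sub_prompt(obj)]
    )
    return f"{sentence} {template} ."

# A logic rule such as  person AND "born in" AND city  =>  per:city_of_birth
# is encoded by requiring specific label words at the three [MASK] positions.
RULES = {
    ("person", "born in", "city"): "per:city_of_birth",
    ("organization", "founded by", "person"): "org:founded_by",
}

if __name__ == "__main__":
    print(build_prompt("Alice was born in Paris.", "Alice", "Paris"))
    # Alice was born in Paris. the [MASK] Alice [MASK] the [MASK] Paris .
```

Under this composition, classification amounts to scoring each class by how well the PLM's masked-language-modeling predictions at the [MASK] positions satisfy the corresponding rule.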
Related papers
- Less is more: Summarizing Patch Tokens for efficient Multi-Label Class-Incremental Learning [38.36863497458095]
We propose MULTI-LANE (Multi-Label class incremental learning via summarising pAtch tokeN Embeddings), a new class-incremental learning method that enables learning disentangled task-specific representations in MLCIL while ensuring fast inference.
arXiv Detail & Related papers (2024-05-24T15:18:27Z) - RulePrompt: Weakly Supervised Text Classification with Prompting PLMs and Self-Iterative Logical Rules [30.239044569301534]
Weakly supervised text classification (WSTC) has attracted increasing attention due to its applicability in classifying large volumes of text.
We propose a prompting PLM-based approach named RulePrompt for the WSTC task, consisting of a rule mining module and a rule-enhanced pseudo label generation module.
Our approach yields interpretable category rules, proving its advantage in disambiguating easily-confused categories.
arXiv Detail & Related papers (2024-03-05T12:50:36Z) - TCP: Textual-based Class-aware Prompt tuning for Visual-Language Model [78.77544632773404]
We present Textual-based Class-aware Prompt tuning (TCP), which explicitly incorporates prior knowledge about classes to enhance their discriminability.
TCP consistently achieves superior performance while demanding less training time.
arXiv Detail & Related papers (2023-11-30T03:59:23Z) - Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances into the prompts.
IPT significantly outperforms task-based prompt learning methods, and achieves comparable performance to conventional finetuning with only 0.5% - 1.5% of tuned parameters.
arXiv Detail & Related papers (2022-06-04T10:08:50Z) - Prompt Tuning for Discriminative Pre-trained Language Models [96.04765512463415]
Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks.
It is still unknown whether and how discriminative PLMs, e.g., ELECTRA, can be effectively prompt-tuned.
We present DPT, the first prompt tuning framework for discriminative PLMs, which reformulates NLP tasks into a discriminative language modeling problem.
arXiv Detail & Related papers (2022-05-23T10:11:50Z) - PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners [15.130992223266734]
We propose a novel label-guided data augmentation framework, PromptDA, which exploits the enriched label semantic information for data augmentation.
Our experimental results on few-shot text classification tasks demonstrate the superior performance of the proposed framework.
arXiv Detail & Related papers (2022-05-18T22:15:20Z) - Towards Unified Prompt Tuning for Few-shot Text Classification [47.71344780587704]
We present the Unified Prompt Tuning (UPT) framework, leading to better few-shot text classification for BERT-style models.
In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed for joint prompt learning across different NLP tasks.
We also design a self-supervised task named Knowledge-enhanced Selective Masked Language Modeling to improve the PLM's generalization abilities.
arXiv Detail & Related papers (2022-05-11T07:40:45Z) - IDPG: An Instance-Dependent Prompt Generation Method [58.45110542003139]
Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt to each input instance during the model training stage.
We propose a conditional prompt generation method to generate prompts for each input instance.
arXiv Detail & Related papers (2022-04-09T15:45:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.