A Few-shot Approach to Resume Information Extraction via Prompts
- URL: http://arxiv.org/abs/2209.09450v2
- Date: Sat, 20 May 2023 03:18:19 GMT
- Title: A Few-shot Approach to Resume Information Extraction via Prompts
- Authors: Chengguang Gan, Tatsunori Mori
- Abstract summary: This paper applies prompt learning to resume information extraction.
We create manual templates and verbalizers tailored to resume texts.
We present the Manual Knowledgeable Verbalizer (MKV), a rule for constructing verbalizers for specific applications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prompt learning's fine-tuning performance on text classification tasks has
attracted attention in the NLP community. This paper applies it to resume information
extraction, improving existing methods for this task. We created manual
templates and verbalizers tailored to resume texts and compared the performance
of Masked Language Model (MLM) and Seq2Seq PLMs. Also, we enhanced the
verbalizer design for Knowledgeable Prompt-tuning, contributing to prompt
template design across NLP tasks. We present the Manual Knowledgeable
Verbalizer (MKV), a rule for constructing verbalizers for specific
applications. Our tests show that MKV rules yield more effective, robust
templates and verbalizers than existing methods. Our MKV approach resolved
sample imbalance, surpassing current automatic prompt methods. This study
underscores the value of tailored prompt learning for resume extraction,
stressing the importance of custom-designed templates and verbalizers.
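To make the manual template and verbalizer idea concrete, the following is a minimal sketch of prompt-based resume classification with a masked language model. The template, resume categories, and label words below are illustrative assumptions for demonstration, not the MKV templates and verbalizers defined in the paper.
```python
# Minimal sketch of prompt-based classification with a manual template and a
# manual verbalizer. The template, resume categories, and label words are
# illustrative assumptions, not the paper's actual MKV construction.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # any masked-language-model PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Manual verbalizer: each (assumed) resume label maps to several label words.
verbalizer = {
    "education":  ["education", "degree", "university"],
    "experience": ["experience", "work", "employment"],
    "skill":      ["skill", "ability", "knowledge"],
}

def classify(sentence: str) -> str:
    # Manual template: wrap the resume sentence around a [MASK] slot.
    prompt = f"{sentence} This part of the resume is about {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the mask position and read the vocabulary logits there.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    mask_logits = logits[0, mask_pos]
    # Score each label by the mean logit of its label words at the mask slot.
    scores = {
        label: mask_logits[[tokenizer.convert_tokens_to_ids(w) for w in words]].mean().item()
        for label, words in verbalizer.items()
    }
    return max(scores, key=scores.get)

print(classify("Bachelor of Science in Computer Science, 2018."))
```
In the few-shot setting studied by the paper, the template and verbalizer would be kept fixed while the PLM is fine-tuned on a handful of labelled resume sentences.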
Related papers
- Optimising Hard Prompts with Few-Shot Meta-Prompting [0.0]
Contextual prompts include context, in the form of a document or dialogue, along with natural language instructions to the Large Language Model (LLM).
With the context masked, a contextual prompt acts as a template for prompts.
In this paper, we present an iterative method to generate better templates using an LLM from an existing set of prompt templates without revealing the context to the LLM.
arXiv Detail & Related papers (2024-07-09T07:02:57Z)
- MetricPrompt: Prompting Model as a Relevance Metric for Few-shot Text Classification [65.51149771074944]
MetricPrompt eases verbalizer design difficulty by reformulating few-shot text classification task into text pair relevance estimation task.
We conduct experiments on three widely used text classification datasets across four few-shot settings.
Results show that MetricPrompt outperforms manual verbalizer and other automatic verbalizer design methods across all few-shot settings.
arXiv Detail & Related papers (2023-06-15T06:51:35Z)
- Automated Few-shot Classification with Instruction-Finetuned Language Models [76.69064714392165]
We show that AuT-Few outperforms state-of-the-art few-shot learning methods.
We also show that AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark.
arXiv Detail & Related papers (2023-05-21T21:50:27Z)
- Towards Unified Prompt Tuning for Few-shot Text Classification [47.71344780587704]
We present the Unified Prompt Tuning (UPT) framework, leading to better few-shot text classification for BERT-style models.
In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed for joint prompt learning across different NLP tasks.
We also design a self-supervised task named Knowledge-enhanced Selective Masked Language Modeling to improve the PLM's generalization abilities.
arXiv Detail & Related papers (2022-05-11T07:40:45Z)
- AdaPrompt: Adaptive Model Training for Prompt-based NLP [77.12071707955889]
We propose AdaPrompt, which adaptively retrieves external data for continual pretraining of PLMs.
Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings.
In zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35% relative error reduction.
arXiv Detail & Related papers (2022-02-10T04:04:57Z)
- Context-Tuning: Learning Contextualized Prompts for Natural Language Generation [52.835877179365525]
We propose a novel continuous prompting approach, called Context-Tuning, for fine-tuning PLMs for natural language generation.
Firstly, the prompts are derived from the input text, so that they can elicit useful knowledge from PLMs for generation.
Secondly, to further enhance the relevance of the generated text to the inputs, we utilize continuous inverse prompting to refine the process of natural language generation.
arXiv Detail & Related papers (2022-01-21T12:35:28Z)
- Prompt-Learning for Fine-Grained Entity Typing [40.983849729537795]
We investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.
We propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types.
arXiv Detail & Related papers (2021-08-24T09:39:35Z)
- AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts [46.03503882865222]
AutoPrompt is an automated method to create prompts for a diverse set of tasks based on a gradient-guided search.
We show that masked language models (MLMs) have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or finetuning.
arXiv Detail & Related papers (2020-10-29T22:54:00Z)
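For context on the gradient-guided search mentioned in the AutoPrompt entry above, here is a minimal, hedged sketch of one HotFlip-style candidate-selection step for a masked LM. The toy sentiment example, the label word, and the single-step search are illustrative assumptions and do not reproduce the full AutoPrompt algorithm.
```python
# Minimal sketch of an AutoPrompt-style, gradient-guided trigger search step
# for a masked LM. The toy sentiment example, the label word "good", and the
# single HotFlip-style step are illustrative assumptions, not the full method.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()
embedding_matrix = model.get_input_embeddings().weight  # (vocab, hidden)

text = "the movie was wonderful ."                        # toy labelled example
label_word_id = tokenizer.convert_tokens_to_ids("good")   # assumed label word
num_triggers = 3
trigger_ids = [tokenizer.convert_tokens_to_ids("the")] * num_triggers  # init

# Build "[CLS] text [T] ... [T] [MASK] [SEP]" and remember key positions.
text_ids = tokenizer.encode(text, add_special_tokens=False)
input_ids = torch.tensor([[tokenizer.cls_token_id, *text_ids, *trigger_ids,
                           tokenizer.mask_token_id, tokenizer.sep_token_id]])
trigger_start = 1 + len(text_ids)
mask_pos = trigger_start + num_triggers

# Forward pass through input embeddings so we can take gradients w.r.t. them.
inputs_embeds = model.get_input_embeddings()(input_ids)
inputs_embeds.retain_grad()
logits = model(inputs_embeds=inputs_embeds).logits
log_prob = torch.log_softmax(logits[0, mask_pos], dim=-1)[label_word_id]
log_prob.backward()

# HotFlip-style first-order scores: estimate how swapping each vocabulary
# token into trigger slot 0 changes the label-word log-probability.
grad = inputs_embeds.grad[0, trigger_start]   # gradient at the first trigger
with torch.no_grad():
    scores = embedding_matrix @ grad          # (vocab,)
print("Top candidate triggers:",
      tokenizer.convert_ids_to_tokens(scores.topk(5).indices.tolist()))
```
A full search would typically repeat this step over every trigger position and labelled batch, re-evaluating the top-scoring candidates exactly before committing a swap.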