Keyphrase Prediction With Pre-trained Language Model
- URL: http://arxiv.org/abs/2004.10462v1
- Date: Wed, 22 Apr 2020 09:35:02 GMT
- Title: Keyphrase Prediction With Pre-trained Language Model
- Authors: Rui Liu, Zheng Lin, Weiping Wang
- Abstract summary: We propose to divide keyphrase prediction into two subtasks, i.e., present keyphrase extraction (PKE) and absent keyphrase generation (AKG).
For PKE, we tackle this task as a sequence labeling problem with the pre-trained language model BERT.
For AKG, we introduce a Transformer-based architecture, which fully integrates the present keyphrase knowledge learned from PKE by the fine-tuned BERT.
- Score: 16.06425973336514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, generative methods have been widely used in keyphrase prediction,
thanks to their capability to produce both present keyphrases that appear in
the source text and absent keyphrases that do not match any source text.
However, absent keyphrases are generated at the cost of present keyphrase
prediction performance, since previous works mainly use generative models that
rely on a copying mechanism and select words step by step. Moreover, an
extractive model that directly extracts a text span is better suited to
predicting present keyphrases. Considering the different characteristics of
extractive and generative methods, we propose to divide the keyphrase
prediction into two subtasks, i.e., present keyphrase extraction (PKE) and
absent keyphrase generation (AKG), to fully exploit their respective
advantages. On this basis, a joint inference framework is proposed to make the
most of BERT in two subtasks. For PKE, we tackle this task as a sequence
labeling problem with the pre-trained language model BERT. For AKG, we
introduce a Transformer-based architecture, which fully integrates the present
keyphrase knowledge learned from PKE by the fine-tuned BERT. The experimental
results show that our approach can achieve state-of-the-art results on both
tasks on benchmark datasets.
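To make the PKE formulation concrete, the sketch below frames present keyphrase extraction as BIO token labeling on top of BERT, as the abstract describes. It is only an illustration of that general technique, not the authors' released code: the Hugging Face checkpoint name, the B-KP/I-KP label set, and the example sentence are assumptions, and the classification head is randomly initialized until the model is fine-tuned on keyphrase-labeled data.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# present keyphrase extraction as BIO sequence labeling with BERT,
# using the Hugging Face transformers library.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

LABELS = ["O", "B-KP", "I-KP"]  # hypothetical BIO tag set for present keyphrase spans

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)
model.eval()

text = "Generative methods have been widely used in keyphrase prediction."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, num_labels)

# One BIO tag per WordPiece token; predictions are meaningless until fine-tuned.
tags = [LABELS[i] for i in logits.argmax(dim=-1)[0].tolist()]
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
for tok, tag in zip(tokens, tags):
    print(f"{tok}\t{tag}")

# After fine-tuning on gold BIO labels, contiguous B-KP/I-KP runs are merged
# back into text spans, which become the predicted present keyphrases.
```

For AKG, the abstract describes a Transformer-based generator that reuses the present-keyphrase knowledge learned by the fine-tuned BERT; a comparable sketch would condition a Transformer decoder on the BERT encoder states, but the paper's specific integration mechanism is not reproduced here.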
Related papers
- Pre-Trained Language Models for Keyphrase Prediction: A Review [2.7869482272876622]
Keyphrase Prediction (KP) is essential for identifying keyphrases in a document that can summarize its content.
Recent Natural Language Processing advances have developed more efficient KP models using deep learning techniques.
This paper extensively examines the topic of pre-trained language models for keyphrase prediction (PLM-KP).
arXiv Detail & Related papers (2024-09-02T09:15:44Z)
- MetaKP: On-Demand Keyphrase Generation [52.48698290354449]
We introduce on-demand keyphrase generation, a novel paradigm that requires keyphrases that conform to specific high-level goals or intents.
We present MetaKP, a large-scale benchmark comprising four datasets, 7500 documents, and 3760 goals across news and biomedical domains with human-annotated keyphrases.
We demonstrate the potential of our method to serve as a general NLP infrastructure, exemplified by its application in epidemic event detection from social media.
arXiv Detail & Related papers (2024-06-28T19:02:59Z)
- SimCKP: Simple Contrastive Learning of Keyphrase Representations [36.88517357720033]
We propose SimCKP, a simple contrastive learning framework that consists of two stages: 1) An extractor-generator that extracts keyphrases by learning context-aware phrase-level representations in a contrastive manner while also generating keyphrases that do not appear in the document; and 2) A reranker that adapts scores for each generated phrase by likewise aligning their representations with the corresponding document.
arXiv Detail & Related papers (2023-10-12T11:11:54Z)
- Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study [76.52997424694767]
We present an in-depth empirical study of keyphrase extraction and keyphrase generation using pre-trained language models.
We show that PLMs have competitive high-resource performance and state-of-the-art low-resource performance.
Further results show that in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models.
arXiv Detail & Related papers (2022-12-20T13:20:21Z)
- Representation Learning for Resource-Constrained Keyphrase Generation [78.02577815973764]
We introduce salient span recovery and salient span prediction as guided denoising language modeling objectives.
We show the effectiveness of the proposed approach for low-resource and zero-shot keyphrase generation.
arXiv Detail & Related papers (2022-03-15T17:48:04Z)
- Deep Keyphrase Completion [59.0413813332449]
Keyphrases provide accurate, highly compact, and concise information about document content and are widely used for discourse comprehension, organization, and text retrieval.
We propose keyphrase completion (KPC) to generate more keyphrases for a document (e.g., a scientific publication), taking advantage of the document content along with a very limited number of known keyphrases.
We name it deep keyphrase completion (DKPC) since it attempts to capture the deep semantic meaning of the document content together with the known keyphrases via a deep learning framework.
arXiv Detail & Related papers (2021-10-29T07:15:35Z)
- UniKeyphrase: A Unified Extraction and Generation Framework for Keyphrase Prediction [20.26899340581431]
The keyphrase prediction (KP) task aims to predict several keyphrases that summarize the main idea of a given document.
Mainstream KP methods can be categorized into purely generative approaches and integrated models with extraction and generation.
We propose UniKeyphrase, a novel end-to-end learning framework that jointly learns to extract and generate keyphrases.
arXiv Detail & Related papers (2021-06-09T07:09:51Z)
- Keyphrase Extraction with Dynamic Graph Convolutional Networks and Diversified Inference [50.768682650658384]
Keyphrase extraction (KE) aims to summarize a set of phrases that accurately express a concept or a topic covered in a given document.
The recent Sequence-to-Sequence (Seq2Seq) based generative framework is widely used for the KE task and has obtained competitive performance on various benchmarks.
In this paper, we propose to adopt the Dynamic Graph Convolutional Networks (DGCN) to solve the above two problems simultaneously.
arXiv Detail & Related papers (2020-10-24T08:11:23Z)
- Exclusive Hierarchical Decoding for Deep Keyphrase Generation [63.357895318562214]
Keyphrase generation (KG) aims to summarize the main ideas of a document into a set of keyphrases.
Previous work in this setting employs a sequential decoding process to generate keyphrases.
We propose an exclusive hierarchical decoding framework that includes a hierarchical decoding process and either a soft or a hard exclusion mechanism.
arXiv Detail & Related papers (2020-04-18T02:58:00Z)
- Keyphrase Extraction with Span-based Feature Representations [13.790461555410747]
Keyphrases are capable of providing semantic metadata characterizing documents.
There are three main approaches to keyphrase extraction: (i) the traditional two-step ranking method, (ii) sequence labeling, and (iii) generation with neural networks.
In this paper, we propose a novel Span Keyphrase Extraction model that extracts span-based feature representations of keyphrases directly from all the content tokens.
arXiv Detail & Related papers (2020-02-13T09:48:31Z)