Keyphrase Generation with Fine-Grained Evaluation-Guided Reinforcement Learning
- URL: http://arxiv.org/abs/2104.08799v1
- Date: Sun, 18 Apr 2021 10:13:46 GMT
- Title: Keyphrase Generation with Fine-Grained Evaluation-Guided Reinforcement Learning
- Authors: Yichao Luo, Yige Xu, Jiacheng Ye, Xipeng Qiu, Qi Zhang
- Abstract summary: Keyphrase Generation (KG) is a classical task for capturing the central idea from a given document.
In this paper, we propose a new fine-grained evaluation metric that considers several granularities.
To learn more latent linguistic patterns, we use a pre-trained model (e.g., BERT) to compute a continuous similarity score between predicted keyphrases and target keyphrases.
- Score: 30.09715149060206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aiming to generate a set of keyphrases, Keyphrase Generation (KG) is a
classical task for capturing the central idea of a given document. Traditional
KG evaluation metrics are only aware of the exact correctness of predictions at
the phrase level and ignore the semantic similarity between similar predictions
and targets, which inhibits the model from learning deep linguistic patterns.
In this paper, we propose a new fine-grained evaluation metric that considers
several granularities: token-level $F_1$ score, edit distance, duplication, and
prediction quantity. To learn more latent linguistic patterns, we use a
pre-trained model (e.g., BERT) to compute a continuous similarity score between
predicted and target keyphrases. Overall, we propose a two-stage Reinforcement
Learning (RL) training framework with two reward functions: our proposed
fine-grained evaluation score and the vanilla $F_1$ score. This framework helps
the model identify partially matched phrases that can be further optimized into
exact matches. Experiments on four KG benchmarks show that our proposed training
framework outperforms traditional RL training frameworks on all evaluation
scores. In addition, our method effectively alleviates the synonym problem and
generates higher-quality predictions.
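The metric's components are concrete enough to sketch. Below is a minimal, illustrative Python reward function, not the authors' implementation: it assumes a simple weighted blend of token-level F1, normalized edit distance, and a pluggable semantic-similarity term (in the paper a pre-trained model such as BERT would supply that term; the default here is a cheap placeholder), plus penalties for duplication and over/under-generation. The helper names, weights, and aggregation are all assumptions for illustration.

```python
"""Illustrative sketch of a fine-grained keyphrase reward.

Combines token-level F1, normalized edit distance, a duplication
penalty, a prediction-quantity penalty, and a pluggable semantic
similarity. The weights and aggregation are assumptions, not the
authors' formula.
"""
from collections import Counter
from typing import Callable, List


def token_f1(pred: str, target: str) -> float:
    """Token-level F1 between a predicted and a target keyphrase."""
    p, t = pred.lower().split(), target.lower().split()
    if not p or not t:
        return 0.0
    overlap = sum((Counter(p) & Counter(t)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(t)
    return 2 * precision * recall / (precision + recall)


def edit_similarity(pred: str, target: str) -> float:
    """1 - normalized Levenshtein distance over characters."""
    a, b = pred.lower(), target.lower()
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return 1.0 - prev[-1] / max(len(a), len(b))


def fine_grained_reward(
    preds: List[str],
    targets: List[str],
    semantic_sim: Callable[[str, str], float] = edit_similarity,  # placeholder; a BERT-based score in the paper
    weights=(0.4, 0.2, 0.2, 0.2),  # arbitrary illustrative weights
) -> float:
    """Score a predicted keyphrase set against the gold set.

    Each prediction is credited with its best-matching target under a
    blend of token F1, edit similarity, and semantic similarity;
    duplicates and a mismatched number of predictions are penalized.
    """
    if not preds or not targets:
        return 0.0
    w_f1, w_edit, w_sem, w_penalty = weights
    per_pred = []
    for p in preds:
        best = max(
            w_f1 * token_f1(p, t)
            + w_edit * edit_similarity(p, t)
            + w_sem * semantic_sim(p, t)
            for t in targets
        )
        per_pred.append(best)
    match_score = sum(per_pred) / len(preds)
    dup_rate = 1.0 - len(set(preds)) / len(preds)                  # duplication
    quantity_gap = abs(len(preds) - len(targets)) / len(targets)   # prediction quantity
    return match_score - w_penalty * (dup_rate + quantity_gap)


if __name__ == "__main__":
    preds = ["reinforcement learning", "keyphrase generation", "keyphrase generation"]
    golds = ["keyphrase generation", "reinforcement learning", "evaluation metric"]
    print(round(fine_grained_reward(preds, golds), 3))
```

In the two-stage framework described above, a score like this would play the role of the first-stage RL reward, with the vanilla $F_1$ reward taking over in the second stage; that training loop is not shown here.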
Related papers
- Adapting Dual-encoder Vision-language Models for Paraphrased Retrieval [55.90407811819347]
We consider the task of paraphrased text-to-image retrieval where a model aims to return similar results given a pair of paraphrased queries.
We train a dual-encoder model starting from a language model pretrained on a large text corpus.
Compared to public dual-encoder models such as CLIP and OpenCLIP, the model trained with our best adaptation strategy achieves a significantly higher ranking similarity for paraphrased queries.
arXiv Detail & Related papers (2024-05-06T06:30:17Z)
- Assessing Keyness using Permutation Tests [0.0]
We replace the token-by-token sampling model with a model in which corpora are samples of documents rather than tokens.
We do not need any assumptions about how the tokens are organized within or across documents, and the approach works with basically *any* keyness score (a minimal sketch of this document-level permutation scheme appears after this list).
arXiv Detail & Related papers (2023-08-25T13:52:57Z)
- FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets [69.91340332545094]
We introduce FLASK, a fine-grained evaluation protocol for both human-based and model-based evaluation.
We experimentally observe that the fine-grained nature of the evaluation is crucial for attaining a holistic view of model performance.
arXiv Detail & Related papers (2023-07-20T14:56:35Z)
- Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
- Neural Keyphrase Generation: Analysis and Evaluation [47.004575377472285]
We study various tendencies exhibited by three strong models: T5 (based on a pre-trained transformer), CatSeq-Transformer (a non-pretrained Transformer), and ExHiRD (based on a recurrent neural network).
We propose a novel metric framework, SoftKeyScore, to evaluate the similarity between two sets of keyphrases.
arXiv Detail & Related papers (2023-04-27T00:10:21Z)
- KPEval: Towards Fine-Grained Semantic-Based Keyphrase Evaluation [69.57018875757622]
We propose KPEval, a comprehensive evaluation framework consisting of four critical aspects: reference agreement, faithfulness, diversity, and utility.
Using KPEval, we re-evaluate 23 keyphrase systems and discover that established model comparison results have blind-spots.
arXiv Detail & Related papers (2023-03-27T17:45:38Z)
- Language Models in the Loop: Incorporating Prompting into Weak Supervision [11.10422546502386]
We propose a new strategy for applying large pre-trained language models to novel tasks when labeled training data is limited.
Instead of applying the model in a typical zero-shot or few-shot fashion, we treat the model as the basis for labeling functions in a weak supervision framework.
arXiv Detail & Related papers (2022-05-04T20:42:40Z)
- Pointwise Paraphrase Appraisal is Potentially Problematic [21.06607915149245]
We show that the standard way of fine-tuning BERT for paraphrase identification by pairing two sentences as one sequence results in a model with state-of-the-art performance.
We also show that these models may even assign a higher paraphrase score to a pair of randomly selected sentences than to a pair of identical ones.
arXiv Detail & Related papers (2020-05-25T09:27:31Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
- Document Ranking with a Pretrained Sequence-to-Sequence Model [56.44269917346376]
We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words" (a hypothetical scoring sketch appears after this list).
Our approach significantly outperforms an encoder-only model in a data-poor regime.
arXiv Detail & Related papers (2020-03-14T22:29:50Z)
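As noted in the keyness entry above, the document-level permutation idea is easy to illustrate. The sketch below is hypothetical and not the cited paper's code: corpora are treated as samples of documents, the study/reference labels are permuted, and a one-sided p-value is estimated for an arbitrary keyness score (a plain frequency difference is used as a stand-in).

```python
"""Hypothetical sketch of document-level permutation testing for keyness.

Corpora are treated as samples of documents; the corpus labels are
shuffled and an arbitrary keyness score is recomputed under each
permutation. Illustration only, not the cited paper's implementation.
"""
import random
from typing import Callable, List

Doc = List[str]  # a document as a list of tokens


def freq(term: str, docs: List[Doc]) -> float:
    """Relative frequency of `term` across a corpus of documents."""
    total = sum(len(d) for d in docs) or 1
    return sum(d.count(term) for d in docs) / total


def diff_score(term: str, study: List[Doc], reference: List[Doc]) -> float:
    """A simple keyness score: frequency difference (any score works)."""
    return freq(term, study) - freq(term, reference)


def permutation_pvalue(
    term: str,
    study: List[Doc],
    reference: List[Doc],
    score: Callable[[str, List[Doc], List[Doc]], float] = diff_score,
    n_perm: int = 1000,
    seed: int = 0,
) -> float:
    """One-sided p-value: how often a random split of the pooled documents
    yields a keyness score at least as large as the observed one."""
    rng = random.Random(seed)
    observed = score(term, study, reference)
    pooled = study + reference
    n_study = len(study)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if score(term, pooled[:n_study], pooled[n_study:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)


if __name__ == "__main__":
    study = [["model", "keyphrase", "model"], ["model", "reward"]]
    reference = [["corpus", "token"], ["token", "document", "corpus"]]
    print(permutation_pvalue("model", study, reference, n_perm=200))
```

Because only the document labels are shuffled, within-document token order and clustering are left intact, which is the point of sampling documents rather than tokens.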
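The document-ranking entry above describes training a sequence-to-sequence model to generate relevance labels as target words. The following is a rough sketch of how such scoring could look, assuming a monoT5-style prompt with "true"/"false" label tokens and using the Hugging Face `transformers` API; the prompt template and label tokens are assumptions, and a base checkpoint would need to be fine-tuned on relevance-labeled data before the scores mean anything.

```python
"""Hypothetical relevance scoring with a seq2seq model (monoT5-style sketch).

The prompt template and "true"/"false" label tokens are assumptions for
illustration; a base T5 checkpoint must be fine-tuned on relevance data
before these scores are meaningful.
"""
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "t5-small"  # placeholder checkpoint
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
model.eval()

# Ids of the candidate target words (assumed to be single sentencepiece tokens).
TRUE_ID = tokenizer("true", add_special_tokens=False).input_ids[0]
FALSE_ID = tokenizer("false", add_special_tokens=False).input_ids[0]


def relevance_score(query: str, document: str) -> float:
    """Probability mass on "true" vs "false" at the first decoder step."""
    prompt = f"Query: {query} Document: {document} Relevant:"
    enc = tokenizer(prompt, return_tensors="pt", truncation=True)
    # Decode a single step starting from the model's decoder start token.
    decoder_input_ids = torch.full(
        (1, 1), model.config.decoder_start_token_id, dtype=torch.long
    )
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=decoder_input_ids).logits[0, -1]
    pair = torch.stack([logits[TRUE_ID], logits[FALSE_ID]])
    return torch.softmax(pair, dim=0)[0].item()


if __name__ == "__main__":
    print(relevance_score("keyphrase generation", "We study keyphrase models."))
```

Documents would then be ranked by this score for a given query.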