Entailment Relation Aware Paraphrase Generation
- URL: http://arxiv.org/abs/2203.10483v1
- Date: Sun, 20 Mar 2022 08:02:09 GMT
- Title: Entailment Relation Aware Paraphrase Generation
- Authors: Abhilasha Sancheti, Balaji Vasan Srinivasan, Rachel Rudinger
- Abstract summary: We propose a reinforcement learning-based weakly-supervised paraphrasing system, ERAP, that can be trained using existing paraphrase and natural language inference corpora.
A combination of automated and human evaluations shows that ERAP generates paraphrases conforming to the specified entailment relation.
- Score: 17.6146622291895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new task of entailment relation aware paraphrase generation
which aims at generating a paraphrase conforming to a given entailment relation
(e.g. equivalent, forward entailing, or reverse entailing) with respect to a
given input. We propose a reinforcement learning-based weakly-supervised
paraphrasing system, ERAP, that can be trained using existing paraphrase and
natural language inference (NLI) corpora without an explicit task-specific
corpus. A combination of automated and human evaluations shows that ERAP
generates paraphrases that conform to the specified entailment relation and
are of good quality compared to the baselines and uncontrolled paraphrasing
systems. Using ERAP to augment training data for a downstream textual
entailment task improves performance over an uncontrolled paraphrasing system
and introduces fewer training artifacts, indicating the benefit of explicit
control during paraphrasing.
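To make the controlled setting concrete, the entailment relation between an input and its paraphrase can be derived from bidirectional NLI judgments, and matching the target relation can serve as a reward signal for the RL-trained paraphraser. The sketch below is illustrative only, not the authors' implementation; all function names and the thresholding scheme are hypothetical, and the probabilities would in practice come from an off-the-shelf NLI model.

```python
# Illustrative sketch (not ERAP's actual code): derive an entailment
# relation label from bidirectional NLI probabilities, and score a
# paraphrase against a target relation.

def relation_label(p_fwd: float, p_rev: float, thresh: float = 0.5) -> str:
    """Classify the relation between input x and paraphrase y.

    p_fwd: NLI probability that x entails y.
    p_rev: NLI probability that y entails x.
    """
    fwd = p_fwd >= thresh
    rev = p_rev >= thresh
    if fwd and rev:
        return "equivalent"   # mutual entailment
    if fwd:
        return "forward"      # x entails y, but not vice versa
    if rev:
        return "reverse"      # y entails x, but not vice versa
    return "none"

def relation_reward(target: str, p_fwd: float, p_rev: float) -> float:
    """Reward of 1.0 when the predicted relation matches the target, else 0.0."""
    return 1.0 if relation_label(p_fwd, p_rev) == target else 0.0
```

For example, a paraphrase that the input entails but that does not entail the input back would be labeled "forward", earning full reward only when forward entailment was the requested relation.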
Related papers
- Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition [4.7938839332508945]
We propose a Prompt-based Logical Semantics Enhancement (PLSE) method for Implicit Discourse Relation Recognition (IDRR).
Our method seamlessly injects knowledge relevant to discourse relations into pre-trained language models through prompt-based connective prediction.
Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.
arXiv Detail & Related papers (2023-11-01T08:38:08Z)
- Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback [57.816210168909286]
We leverage recent progress on textual entailment models to address this problem for abstractive summarization systems.
We use reinforcement learning with reference-free, textual entailment rewards to optimize for factual consistency.
Our results, according to both automatic metrics and human evaluation, show that our method considerably improves the faithfulness, salience, and conciseness of the generated summaries.
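A reference-free entailment reward of the kind this entry describes can be sketched as a simple aggregation over NLI judgments: each summary sentence is scored by how strongly the source document entails it. This sketch is a hypothetical illustration, not the paper's method; the aggregation rule and function name are assumptions, and the probabilities would come from a separate NLI model.

```python
# Hypothetical sketch of a reference-free factual-consistency reward.
# entail_probs holds, for each summary sentence, the NLI probability
# that the source document entails that sentence.

def consistency_reward(entail_probs: list[float]) -> float:
    """Aggregate per-sentence entailment probabilities for a summary.

    Taking the minimum makes the weakest sentence dominate, so a single
    unsupported claim drives the reward down.
    """
    return min(entail_probs) if entail_probs else 0.0
```

Using the minimum (rather than the mean) is one plausible design choice: it penalizes any hallucinated sentence instead of letting well-supported sentences average it out.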
arXiv Detail & Related papers (2023-05-31T21:04:04Z)
- Unsupervised Syntactically Controlled Paraphrase Generation with Abstract Meaning Representations [59.10748929158525]
Abstract Meaning Representations (AMR) can greatly improve the performance of unsupervised syntactically controlled paraphrase generation.
Our proposed model, the AMR-enhanced Paraphrase Generator (AMRPG), encodes the AMR graph and the constituency parse of the input sentence into two disentangled semantic and syntactic embeddings.
Experiments show that AMRPG generates more accurate syntactically controlled paraphrases, both quantitatively and qualitatively, compared to the existing unsupervised approaches.
arXiv Detail & Related papers (2022-11-02T04:58:38Z) - Sentence Representation Learning with Generative Objective rather than
Contrastive Objective [86.01683892956144]
We propose a novel generative self-supervised learning objective based on phrase reconstruction.
Our generative learning achieves substantial performance improvements and outperforms the current state-of-the-art contrastive methods.
arXiv Detail & Related papers (2022-10-16T07:47:46Z) - Improving Distantly Supervised Relation Extraction by Natural Language
Inference [9.181270251524866]
We propose a novel DSRE-NLI framework, which considers both distant supervision from existing knowledge bases and indirect supervision from pretrained language models for other tasks.
DSRE-NLI energizes an off-the-shelf natural language inference (NLI) engine with a semi-automatic relation verbalization (SARV) mechanism to provide indirect supervision.
With two simple and effective data consolidation strategies, the quality of training data is substantially improved.
arXiv Detail & Related papers (2022-07-31T02:48:34Z)
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z)
- Generative or Contrastive? Phrase Reconstruction for Better Sentence Representation Learning [86.01683892956144]
We propose a novel generative self-supervised learning objective based on phrase reconstruction.
Our generative learning yields sufficiently powerful sentence representations, achieving performance on Semantic Textual Similarity tasks on par with contrastive learning.
arXiv Detail & Related papers (2022-04-20T10:00:46Z)
- Improving Human-Object Interaction Detection via Phrase Learning and Label Composition [14.483347746239055]
Human-Object Interaction (HOI) detection is a fundamental task in high-level human-centric scene understanding.
We propose PhraseHOI, containing an HOI branch and a novel phrase branch, to leverage language priors and improve relation expression.
arXiv Detail & Related papers (2021-12-14T13:22:16Z)
- Learning to Selectively Learn for Weakly-supervised Paraphrase Generation [81.65399115750054]
We propose a novel approach to generate high-quality paraphrases with weak supervision data.
Specifically, we tackle the weakly-supervised paraphrase generation problem by obtaining abundant weakly-labeled parallel sentences via retrieval-based pseudo paraphrase expansion.
We demonstrate that our approach achieves significant improvements over existing unsupervised approaches, and is even comparable in performance with supervised state-of-the-arts.
arXiv Detail & Related papers (2021-09-25T23:31:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.