Improving Distantly Supervised Relation Extraction by Natural Language
Inference
- URL: http://arxiv.org/abs/2208.00346v1
- Date: Sun, 31 Jul 2022 02:48:34 GMT
- Title: Improving Distantly Supervised Relation Extraction by Natural Language
Inference
- Authors: Kang Zhou, Qiao Qiao, Yuepei Li, Qi Li
- Abstract summary: We propose a novel DSRE-NLI framework, which considers both distant supervision from existing knowledge bases and indirect supervision from pretrained language models for other tasks.
DSRE-NLI energizes an off-the-shelf natural language inference (NLI) engine with a semi-automatic relation verbalization (SARV) mechanism to provide indirect supervision.
With two simple and effective data consolidation strategies, the quality of training data is substantially improved.
- Score: 9.181270251524866
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To reduce human annotations for relation extraction (RE) tasks, distantly
supervised approaches have been proposed, but they struggle with low
performance. In this work, we propose a novel DSRE-NLI framework, which
considers both distant supervision from existing knowledge bases and indirect
supervision from pretrained language models for other tasks. DSRE-NLI energizes
an off-the-shelf natural language inference (NLI) engine with a semi-automatic
relation verbalization (SARV) mechanism to provide indirect supervision and
further consolidates the distant annotations to benefit multi-classification RE
models. The NLI-based indirect supervision acquires only one relation
verbalization template from humans as a semantically general template for each
relationship, and then the template set is enriched by high-quality textual
patterns automatically mined from the distantly annotated corpus. With two
simple and effective data consolidation strategies, the quality of training
data is substantially improved. Extensive experiments demonstrate that the
proposed framework significantly improves the SOTA performance (up to 7.73% F1)
on distantly supervised RE benchmark datasets.
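A minimal sketch of the NLI-verification idea (illustrative, not the authors' implementation; the model name, template, and entailment threshold are assumptions):

```python
# Sketch: verify a distantly annotated relation with an off-the-shelf NLI
# model. Model, template, and threshold are illustrative assumptions.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def entails_relation(sentence, head, tail, template, threshold=0.9):
    """Return True if `sentence` entails the verbalized relation template."""
    hypothesis = template.format(head=head, tail=tail)
    out = nli({"text": sentence, "text_pair": hypothesis})
    result = out[0] if isinstance(out, list) else out
    return result["label"] == "ENTAILMENT" and result["score"] >= threshold

# A consolidation step in this spirit keeps a distant annotation only when
# the NLI engine agrees with it.
sentence = "Marie Curie was born in Warsaw in 1867."
print(entails_relation(sentence, "Marie Curie", "Warsaw",
                       "{head} was born in {tail}."))
```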
Related papers
- Grasping the Essentials: Tailoring Large Language Models for Zero-Shot Relation Extraction [33.528688487954454]
Relation extraction (RE) aims to identify semantic relationships between entities within text.
Few-shot learning, which aims to lessen annotation demands, typically provides incomplete and biased supervision for target relations.
We introduce REPaL, which comprises three stages: (1) leveraging large language models (LLMs) to generate initial seed instances from relation definitions and an unlabeled corpus.
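A hedged sketch of this seed-generation step, assuming a generic chat-style LLM client; the prompt wording and helper function are hypothetical, not REPaL's:

```python
# Hypothetical sketch of LLM-based seed-instance generation; the prompt and
# parsing are illustrative, not REPaL's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_seed_instances(relation_name, definition, n=5):
    prompt = (
        f"Relation: {relation_name}\n"
        f"Definition: {definition}\n"
        f"Write {n} short sentences, one per line, each expressing this "
        f"relation between two concrete entities."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

seeds = generate_seed_instances(
    "founded_by", "The tail entity is the founder of the head organization.")
```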
arXiv Detail & Related papers (2024-02-17T00:20:06Z)
- Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition [4.7938839332508945]
We propose a Prompt-based Logical Semantics Enhancement (PLSE) method for Implicit Discourse Relation Recognition (IDRR).
Our method seamlessly injects knowledge relevant to discourse relation into pre-trained language models through prompt-based connective prediction.
Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.
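The connective-prediction idea can be sketched as masked-token scoring with a pretrained LM; the model and connective list below are illustrative assumptions, not the PLSE setup:

```python
# Sketch: score discourse connectives for the implicit slot between two
# arguments (illustrative; not the PLSE implementation).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def predict_connective(arg1, arg2,
                       connectives=("because", "however", "meanwhile")):
    prompt = f"{arg1} [MASK] {arg2}"
    results = fill(prompt, targets=list(connectives))
    return [(r["token_str"], r["score"]) for r in results]

print(predict_connective("The flight was delayed for hours.",
                         "the passengers stayed calm."))
```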
arXiv Detail & Related papers (2023-11-01T08:38:08Z)
- Continual Contrastive Finetuning Improves Low-Resource Relation Extraction [34.76128090845668]
Relation extraction has been particularly challenging in low-resource scenarios and domains.
Recent literature has tackled low-resource RE by self-supervised learning.
We propose to pretrain and finetune the RE model using consistent objectives of contrastive learning.
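As a rough illustration, a contrastive objective over two views of each relation instance can be written as an InfoNCE loss (a sketch; the paper's exact objectives may differ):

```python
# InfoNCE-style contrastive loss over relation embeddings (illustrative).
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """anchor, positive: (batch, dim) embeddings of two views per instance."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature      # (batch, batch) similarities
    labels = torch.arange(a.size(0))      # matching pairs on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```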
arXiv Detail & Related papers (2022-12-21T07:30:22Z)
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
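Supervising intermediate steps amounts to a multi-task objective; the task heads and weights in this sketch are hypothetical, not SAIS's actual task set:

```python
# Schematic multi-task objective: the relation loss is combined with
# auxiliary losses for intermediate steps (weights are hypothetical).
import torch

def multi_task_loss(rel_loss, evidence_loss, entity_type_loss,
                    w_rel=1.0, w_ev=0.5, w_ent=0.5):
    return w_rel * rel_loss + w_ev * evidence_loss + w_ent * entity_type_loss

total = multi_task_loss(torch.tensor(0.7), torch.tensor(0.3),
                        torch.tensor(0.2))
```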
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Incorporating Linguistic Knowledge for Abstractive Multi-document Summarization [20.572283625521784]
We develop a neural network based abstractive multi-document summarization (MDS) model.
We incorporate dependency information into a linguistic-guided attention mechanism.
With the help of linguistic signals, sentence-level relations can be correctly captured.
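One generic way to feed dependency information into attention is to bias the attention logits toward syntactically linked token pairs; this sketch is an illustration, not the paper's architecture:

```python
# Sketch: dependency-guided attention adds a bias to attention logits for
# token pairs connected in the dependency graph (illustrative only).
import torch
import torch.nn.functional as F

def dependency_guided_attention(q, k, v, dep_adj, bias=2.0):
    """q, k, v: (seq, dim); dep_adj: (seq, seq) 0/1 adjacency from a parser."""
    scores = q @ k.t() / (q.size(-1) ** 0.5)
    scores = scores + bias * dep_adj      # favor syntactically linked pairs
    return F.softmax(scores, dim=-1) @ v

seq, dim = 6, 16
adj = torch.eye(seq)                      # toy adjacency; a real one comes from a parser
out = dependency_guided_attention(torch.randn(seq, dim), torch.randn(seq, dim),
                                  torch.randn(seq, dim), adj)
```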
arXiv Detail & Related papers (2021-09-23T08:13:35Z)
- SDA: Improving Text Generation with Self Data Augmentation [88.24594090105899]
We propose to improve the standard maximum likelihood estimation (MLE) paradigm by incorporating a self-imitation-learning phase for automatic data augmentation.
Unlike most existing sentence-level augmentation strategies, our method is more general and could be easily adapted to any MLE-based training procedure.
arXiv Detail & Related papers (2021-01-02T01:15:57Z)
- SLM: Learning a Discourse Language Representation with Sentence Unshuffling [53.42814722621715]
We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation.
We show that this feature of our model improves the performance of the original BERT by large margins.
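The objective can be sketched as data preparation that shuffles a document's sentences and asks the model to recover the original order (an illustrative sketch of the setup, not SLM's implementation):

```python
# Sketch: build a sentence-unshuffling training example. The target is the
# permutation that restores the original sentence order (illustrative).
import random

def make_unshuffling_example(sentences, rng=random):
    order = list(range(len(sentences)))
    rng.shuffle(order)
    shuffled = [sentences[i] for i in order]
    return shuffled, order  # model input, per-position original indices

doc = ["He opened the door.", "The room was dark.", "He found the switch."]
inputs, target = make_unshuffling_example(doc)
```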
arXiv Detail & Related papers (2020-10-30T13:33:41Z)
- SelfORE: Self-supervised Relational Feature Learning for Open Relation Extraction [60.08464995629325]
Open-domain relation extraction is the task of extracting open-domain relation facts from natural language sentences.
We propose a self-supervised framework named SelfORE, which exploits weak, self-supervised signals.
Experimental results on three datasets show the effectiveness and robustness of SelfORE.
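A common way to obtain such weak, self-supervised signals is to cluster contextual entity-pair representations into pseudo-labels; the sketch below uses plain KMeans and is illustrative, not SelfORE's adaptive clustering:

```python
# Sketch: pseudo-labels from clustering entity-pair embeddings
# (illustrative; SelfORE's adaptive clustering is more involved).
import numpy as np
from sklearn.cluster import KMeans

def pseudo_labels(entity_pair_embeddings, n_clusters=10, seed=0):
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    return km.fit_predict(entity_pair_embeddings)

emb = np.random.randn(100, 64)   # stand-in for encoder features
labels = pseudo_labels(emb)      # weak signals to train a relation classifier
```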
arXiv Detail & Related papers (2020-04-06T07:23:17Z)
- Joint Contextual Modeling for ASR Correction and Language Understanding [60.230013453699975]
We propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with language understanding (LU).
We show that the error rates of off-the-shelf ASR and downstream LU systems can be reduced significantly, by 14% relative, with joint models trained on small amounts of in-domain data.
arXiv Detail & Related papers (2020-01-28T22:09:25Z)