Distantly-Supervised Neural Relation Extraction with Side Information using BERT
- URL: http://arxiv.org/abs/2004.14443v3
- Date: Thu, 10 Sep 2020 20:30:34 GMT
- Title: Distantly-Supervised Neural Relation Extraction with Side Information using BERT
- Authors: Johny Moreira, Chaina Oliveira, David Macêdo, Cleber Zanchettin, Luciano Barbosa
- Abstract summary: Relation extraction (RE) is the task of categorizing the relationship between entities in a sentence.
One of the methods that adopts this strategy is the RESIDE model, which performs distantly-supervised neural relation extraction using side information from Knowledge Bases.
Given that this method outperformed state-of-the-art baselines, in this paper we propose an approach related to RESIDE that also uses additional side information but simplifies the sentence encoding with BERT embeddings.
- Score: 2.0946724304757955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relation extraction (RE) is the task of categorizing the relationship between
entities in a sentence. A recent paradigm for developing relation extractors is
Distant Supervision (DS), which allows the automatic creation of new datasets
by taking an alignment between a text corpus and a Knowledge Base (KB). KBs can
sometimes also provide additional information for the RE task. One of the
methods that adopts this strategy is the RESIDE model, which performs
distantly-supervised neural relation extraction using side information from
KBs. Given that this method outperformed state-of-the-art baselines, in this
paper we propose an approach related to RESIDE that also uses additional side
information but simplifies the sentence encoding with BERT embeddings.
Through experiments, we show the effectiveness of the proposed method on the
Google Distant Supervision and Riedel datasets against the BGWA and RESIDE
baselines. Although the Area Under the Curve decreases because of the
unbalanced datasets, P@N results show that using BERT for sentence encoding
yields performance superior to the baseline methods.
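The distant-supervision alignment described in the abstract can be sketched as follows. This is a minimal illustration of the labeling idea only, not the paper's pipeline; the toy corpus, entity names, and KB triples are invented for the example.

```python
# Distant supervision sketch: a sentence mentioning two entities that form a
# (head, relation, tail) triple in the Knowledge Base is automatically
# labeled with that relation, yielding a training set without manual labels.

KB = {  # toy KB triples: (head entity, tail entity) -> relation
    ("Barack Obama", "Honolulu"): "born_in",
    ("Paris", "France"): "located_in",
}

corpus = [
    "Barack Obama was born in Honolulu .",
    "Paris is the capital of France .",
    "Barack Obama visited France .",  # no matching triple -> no label
]

def distant_labels(sentence, kb):
    """Align one sentence with the KB: label it with every relation whose
    head and tail entities both appear in the sentence."""
    labels = []
    for (head, tail), relation in kb.items():
        if head in sentence and tail in sentence:
            labels.append((head, tail, relation))
    return labels

dataset = [(sentence, distant_labels(sentence, KB)) for sentence in corpus]
```

Note that the third sentence stays unlabeled even though it mentions two entities, since no KB triple connects them; the noise this heuristic introduces (sentences that mention a triple's entities without expressing the relation) is exactly what multi-instance methods in this area try to mitigate.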
Related papers
- Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study [51.33182775762785]
This paper presents an empirical study to build relation extraction systems in low-resource settings.
We investigate three schemes to evaluate the performance in low-resource settings: (i) different types of prompt-based methods with few-shot labeled data; (ii) diverse balancing methods to address the long-tailed distribution issue; and (iii) data augmentation technologies and self-training to generate more labeled in-domain data.
arXiv Detail & Related papers (2022-10-19T15:46:37Z)
- Simple and Effective Relation-based Embedding Propagation for Knowledge Representation Learning [15.881121633396832]
We propose the Relation-based Embedding Propagation (REP) method to adapt pretrained graph embeddings with context.
We show that REP brings about 10% relative improvement to triplet-based embedding methods on OGBL-WikiKG2.
It takes 5%-83% of the time to achieve results comparable to the state-of-the-art GC-OTE.
arXiv Detail & Related papers (2022-05-13T06:02:13Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction [52.63803634033647]
Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce.
We develop a Gradient Imitation Reinforcement Learning method to encourage pseudo label data to imitate the gradient descent direction on labeled data.
We also propose a framework called GradLRE, which handles two major scenarios in low-resource relation extraction.
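The gradient-imitation idea in the summary above can be illustrated with a small sketch: the usefulness of a pseudo-labeled batch is scored by how closely its gradient direction matches the gradient computed on labeled data. The vectors below are illustrative stand-ins for model gradients, not anything from GradLRE itself.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative gradients (in practice: gradients of the RE model's loss
# with respect to its parameters, flattened into vectors).
grad_labeled = np.array([1.0, 0.5, -0.2])       # direction from labeled data
grad_pseudo_good = np.array([0.9, 0.6, -0.1])   # pseudo-labeled batch, similar direction
grad_pseudo_bad = np.array([-1.0, -0.4, 0.3])   # pseudo-labeled batch, opposite direction

# Gradient-imitation reward: high when pseudo-labeled data would push the
# model in the same direction as labeled data, low (or negative) otherwise.
reward_good = cosine(grad_pseudo_good, grad_labeled)
reward_bad = cosine(grad_pseudo_bad, grad_labeled)
```

Under this scoring, only pseudo-labeled data whose gradient imitates the labeled-data direction earns a high reward and is reinforced.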
arXiv Detail & Related papers (2021-09-14T03:51:15Z)
- D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
arXiv Detail & Related papers (2021-09-10T22:30:48Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Improving BERT Model Using Contrastive Learning for Biomedical Relation Extraction [13.354066085659198]
Contrastive learning is not widely utilized in natural language processing due to the lack of a general method of data augmentation for text data.
In this work, we explore the method of employing contrastive learning to improve the text representation from the BERT model for relation extraction.
The experimental results on three relation extraction benchmark datasets demonstrate that our method can improve the BERT model representation and achieve state-of-the-art performance.
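The summary above does not specify the loss used; a common choice in contrastive sentence-representation learning is an NT-Xent-style objective, which pulls two augmented views of the same sentence together and pushes other sentences in the batch away. The sketch below uses toy 2-d embeddings and is an illustration of that general objective, not the paper's implementation.

```python
import numpy as np

def nt_xent_pair(z_i, z_j, negatives, tau=0.5):
    """NT-Xent-style contrastive loss for one positive pair.

    z_i, z_j:   embeddings of two augmented views of the same sentence.
    negatives:  embeddings of other sentences in the batch.
    tau:        temperature scaling the similarities.
    """
    def sim(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    pos = np.exp(sim(z_i, z_j) / tau)
    neg = sum(np.exp(sim(z_i, n) / tau) for n in negatives)
    return float(-np.log(pos / (pos + neg)))

anchor = np.array([1.0, 0.0])
positive = np.array([1.0, 0.1])      # augmented view, close to the anchor
other = np.array([0.0, 1.0])         # a different sentence in the batch

# Loss is low when the positive pair is aligned and negatives are far away,
# and high when the roles are reversed.
loss_aligned = nt_xent_pair(anchor, positive, [other])
loss_misaligned = nt_xent_pair(anchor, other, [positive])
```

Minimizing this loss over many batches shapes the encoder's representations so that semantically equivalent inputs cluster together, which is the general mechanism the paper applies to BERT representations.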
arXiv Detail & Related papers (2021-04-28T17:50:24Z)
- Improving Distantly-Supervised Relation Extraction through BERT-based Label & Instance Embeddings [2.88848244747161]
We propose REDSandT, a novel distantly-supervised transformer-based RE method.
We exploit BERT's pre-trained model and the relationship between labels and entities.
Experiments in the NYT-10 dataset show that REDSandT captures a broader set of relations with higher confidence.
arXiv Detail & Related papers (2021-02-01T20:50:24Z)
- Named Entity Recognition and Relation Extraction using Enhanced Table Filling by Contextualized Representations [14.614028420899409]
The proposed method computes representations for entity mentions and long-range dependencies without complicated hand-crafted features or neural-network architectures.
We also adapt a tensor dot-product to predict relation labels all at once without resorting to history-based predictions or search strategies.
Despite its simplicity, the experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on the CoNLL04 and ACE05 English datasets.
arXiv Detail & Related papers (2020-10-15T04:58:23Z)
- Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving cases that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
- Hybrid Attention-Based Transformer Block Model for Distant Supervision Relation Extraction [20.644215991166902]
We propose a new framework using hybrid attention-based Transformer block with multi-instance learning to perform the DSRE task.
The proposed approach can outperform the state-of-the-art algorithms on the evaluation dataset.
arXiv Detail & Related papers (2020-03-10T13:05:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.