RH-Net: Improving Neural Relation Extraction via Reinforcement Learning
and Hierarchical Relational Searching
- URL: http://arxiv.org/abs/2010.14255v2
- Date: Tue, 2 Feb 2021 07:24:01 GMT
- Title: RH-Net: Improving Neural Relation Extraction via Reinforcement Learning
and Hierarchical Relational Searching
- Authors: Jianing Wang
- Abstract summary: We propose a novel framework named RH-Net, which utilizes Reinforcement learning and Hierarchical relational searching module to improve relation extraction.
We then propose the hierarchical relational searching module to share the semantics from correlative instances between data-rich and data-poor classes.
- Score: 2.1828601975620257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distant supervision (DS) aims to generate a large-scale, heuristically labeled corpus, which is now widely used for neural relation extraction. However, it suffers heavily from the noisy labeling and long-tail distribution problems. Most advanced approaches address the two problems separately, ignoring their mutual interactions. In this paper, we propose a novel framework named RH-Net, which utilizes Reinforcement learning and a Hierarchical relational searching module to improve relation extraction. We leverage reinforcement learning to instruct the model to select high-quality instances. We then propose the hierarchical relational searching module to share semantics from correlative instances between data-rich and data-poor classes. During the iterative process, the two modules keep interacting to alleviate the noisy labeling and long-tail problems simultaneously. Extensive experiments on the widely used NYT dataset show that our method achieves significant improvements over state-of-the-art baselines.
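The abstract gives no implementation detail, so as an illustration only, the instance-selection idea can be sketched as a REINFORCE-style policy that scores each sentence in an entity-pair bag and keeps the high-scoring ones, with the downstream relation classifier supplying the reward. The module names, dimensions, and reward definition below are assumptions for the sketch, not RH-Net's actual design.

```python
import torch
import torch.nn as nn

class InstanceSelector(nn.Module):
    """Policy network that decides, per sentence encoding, whether to keep it.
    A generic REINFORCE-style sketch, not RH-Net's exact selector."""

    def __init__(self, hidden_dim: int = 230):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, sent_reprs: torch.Tensor):
        # sent_reprs: (num_sentences, hidden_dim) encodings of one entity-pair bag
        keep_prob = torch.sigmoid(self.scorer(sent_reprs)).squeeze(-1)
        dist = torch.distributions.Bernoulli(probs=keep_prob)
        actions = dist.sample()          # 1 = keep the sentence, 0 = discard it
        log_probs = dist.log_prob(actions)
        return actions, log_probs


def policy_loss(log_probs: torch.Tensor, reward: torch.Tensor) -> torch.Tensor:
    # REINFORCE: scale the log-probability of the chosen actions by the reward,
    # e.g. the classifier's log-likelihood of the bag's distant label.
    return -(reward.detach() * log_probs).sum()


# Toy usage with random encodings standing in for a sentence encoder's output.
selector = InstanceSelector(hidden_dim=230)
bag = torch.randn(8, 230)                # 8 sentences for one entity pair
actions, log_probs = selector(bag)
reward = torch.tensor(0.3)               # assumed scalar reward from the classifier
loss = policy_loss(log_probs, reward)
loss.backward()
```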
Related papers
- Nearest Neighbor-Based Contrastive Learning for Hyperspectral and LiDAR
Data Classification [45.026868970899514]
We propose a Nearest Neighbor-based Contrastive Learning Network (NNCNet) to learn discriminative feature representations.
Specifically, we propose a nearest neighbor-based data augmentation scheme to make use of the enhanced semantic relationships among nearby regions.
In addition, we design a bilinear attention module to exploit the second-order and even higher-order feature interactions between the HSI and LiDAR data.
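The bilinear attention module is only named in this summary; as a generic illustration of a second-order interaction between two modality encodings (e.g. HSI and LiDAR features), a minimal sketch is given below, where the dimensions, head count, and softmax normalization are assumptions rather than NNCNet's actual module.

```python
import torch
import torch.nn as nn

class BilinearInteraction(nn.Module):
    """Generic bilinear (second-order) interaction between two feature vectors,
    e.g. an HSI encoding and a LiDAR encoding; not NNCNet's exact module."""

    def __init__(self, dim_a: int, dim_b: int, num_heads: int = 4):
        super().__init__()
        # One bilinear form per head: score_h = a^T W_h b
        self.bilinear = nn.Bilinear(dim_a, dim_b, num_heads)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        scores = self.bilinear(feat_a, feat_b)    # (batch, num_heads)
        return torch.softmax(scores, dim=-1)      # attention weights over heads


# Toy usage with random features standing in for the two modality encoders.
module = BilinearInteraction(dim_a=64, dim_b=32)
weights = module(torch.randn(16, 64), torch.randn(16, 32))
print(weights.shape)  # torch.Size([16, 4])
```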
arXiv Detail & Related papers (2023-01-09T13:43:54Z)
- Towards Realistic Low-resource Relation Extraction: A Benchmark with
Empirical Baseline Study [51.33182775762785]
This paper presents an empirical study to build relation extraction systems in low-resource settings.
We investigate three schemes to evaluate the performance in low-resource settings: (i) different types of prompt-based methods with few-shot labeled data; (ii) diverse balancing methods to address the long-tailed distribution issue; and (iii) data augmentation techniques and self-training to generate more labeled in-domain data.
arXiv Detail & Related papers (2022-10-19T15:46:37Z)
- Improving Long Tailed Document-Level Relation Extraction via Easy
Relation Augmentation and Contrastive Learning [66.83982926437547]
We argue that mitigating the long-tailed distribution problem is crucial for DocRE in real-world scenarios.
Motivated by the long-tailed distribution problem, we propose an Easy Relation Augmentation (ERA) method for improving DocRE.
arXiv Detail & Related papers (2022-05-21T06:15:11Z)
- HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain
Language Model Compression [53.90578309960526]
Large pre-trained language models (PLMs) have shown overwhelming performance compared with traditional neural network methods.
We propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information.
arXiv Detail & Related papers (2021-10-16T11:23:02Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level
Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- MapRE: An Effective Semantic Mapping Approach for Low-resource Relation
Extraction [11.821464352959454]
We propose a framework considering both label-agnostic and label-aware semantic mapping information for low-resource relation extraction.
We show that incorporating the above two types of mapping information in both pretraining and fine-tuning can significantly improve the model performance.
arXiv Detail & Related papers (2021-09-09T09:02:23Z)
- Improving Sentence-Level Relation Extraction through Curriculum Learning [7.117139527865022]
We propose a curriculum learning-based relation extraction model that splits the data by difficulty and uses them for learning.
In experiments on the representative sentence-level relation extraction datasets, TACRED and Re-TACRED, the proposed method showed good performance.
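The summary only states that data are split by difficulty and introduced in stages; a minimal sketch of such a curriculum schedule is shown below, where the difficulty scores and stage boundaries are hypothetical rather than the paper's actual criterion.

```python
from typing import List, Tuple

def curriculum_stages(examples: List[Tuple[str, float]],
                      num_stages: int = 3) -> List[List[str]]:
    """Split examples into cumulative stages of increasing difficulty.

    examples: (sentence, difficulty_score) pairs; the scoring function is assumed,
    e.g. a baseline model's loss on each sentence.
    """
    ordered = sorted(examples, key=lambda pair: pair[1])   # easiest first
    bucket = max(1, len(ordered) // num_stages)
    stages = []
    for k in range(1, num_stages + 1):
        # The final stage always covers the full training set.
        end = len(ordered) if k == num_stages else k * bucket
        stages.append([sent for sent, _ in ordered[:end]])
    return stages


# Toy usage: three sentences with assumed difficulty scores.
data = [("sent_easy", 0.1), ("sent_mid", 0.5), ("sent_hard", 0.9)]
for stage, subset in enumerate(curriculum_stages(data), start=1):
    print(f"stage {stage}: train on {subset}")
```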
arXiv Detail & Related papers (2021-07-20T08:44:40Z)
- Distantly-Supervised Long-Tailed Relation Extraction Using Constraint
Graphs [16.671606030727975]
In this paper, we introduce constraint graphs to model the dependencies between relation labels.
We also propose a novel constraint graph-based relation extraction framework (CGRE) to handle the two challenges simultaneously.
CGRE employs graph convolution networks (GCNs) to propagate information from data-rich relation nodes to data-poor relation nodes.
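The constraint graph and training setup are not given in this summary; a minimal sketch of the propagation step it describes, using a single normalized-adjacency GCN layer over relation nodes (the example graph and dimensions are assumptions, not CGRE's actual configuration), is shown below.

```python
import torch
import torch.nn as nn

class RelationGCNLayer(nn.Module):
    """One graph-convolution step that spreads information between relation nodes,
    e.g. from data-rich to data-poor relations; a generic GCN layer, not CGRE itself."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalize the adjacency (with self-loops) and propagate.
        adj_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = adj_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        norm_adj = deg_inv_sqrt.unsqueeze(1) * adj_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(norm_adj @ node_feats))


# Toy usage: 5 relation nodes with an assumed constraint graph.
adj = torch.zeros(5, 5)
adj[0, 3] = adj[3, 0] = 1.0          # a data-rich relation linked to a data-poor one
layer = RelationGCNLayer(in_dim=16, out_dim=16)
updated = layer(torch.randn(5, 16), adj)
print(updated.shape)  # torch.Size([5, 16])
```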
arXiv Detail & Related papers (2021-05-24T12:02:32Z)
- Improving Long-Tail Relation Extraction with Collaborating
Relation-Augmented Attention [63.26288066935098]
We propose a novel neural network, Collaborating Relation-augmented Attention (CoRA), to handle both the wrong labeling and long-tail relations.
In the experiments on the popular benchmark dataset NYT, the proposed CoRA improves the prior state-of-the-art performance by a large margin.
arXiv Detail & Related papers (2020-10-08T05:34:43Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.