Gradient Imitation Reinforcement Learning for Low Resource Relation
Extraction
- URL: http://arxiv.org/abs/2109.06415v1
- Date: Tue, 14 Sep 2021 03:51:15 GMT
- Authors: Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen,
Philip S. Yu
- Abstract summary: Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce.
We develop a Gradient Imitation Reinforcement Learning method to encourage pseudo-labeled data to imitate the gradient descent direction on labeled data.
We also propose a framework called GradLRE, which handles two major scenarios in low-resource relation extraction.
- Score: 52.63803634033647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-resource Relation Extraction (LRE) aims to extract relation facts from
limited labeled corpora when human annotation is scarce. Existing works either
utilize a self-training scheme to generate pseudo labels, which causes the
gradual drift problem, or leverage a meta-learning scheme that does not solicit
feedback explicitly. To alleviate the selection bias caused by the lack of feedback
loops in existing LRE learning paradigms, we develop a Gradient Imitation
Reinforcement Learning method that encourages pseudo-labeled data to imitate the
gradient descent direction on labeled data and bootstraps its optimization
capability through trial and error. We also propose a framework called GradLRE,
which handles two major scenarios in low-resource relation extraction. Besides
the scenario where unlabeled data is sufficient, GradLRE handles the situation
where no unlabeled data is available, by exploiting a contextualized
augmentation method to generate data. Experimental results on two public
datasets demonstrate the effectiveness of GradLRE on low-resource relation
extraction compared with baselines.
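The core idea described in the abstract — rewarding pseudo labels whose induced gradients align with the gradient computed on labeled data — can be sketched as follows. This is a minimal illustration on a toy logistic-regression model, not the paper's implementation; the cosine-similarity reward and all function names are assumptions inferred from the abstract.

```python
import numpy as np

def logreg_grad(w, X, y):
    """Mean gradient of the logistic loss w.r.t. weights for a batch."""
    p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted probabilities
    return X.T @ (p - y) / len(y)

def gradient_imitation_reward(w, X_lab, y_lab, X_pse, y_pse):
    """Cosine similarity between the pseudo-labeled gradient and the
    labeled-data gradient: higher means the pseudo labels push the model
    in the same direction as real supervision."""
    g_lab = logreg_grad(w, X_lab, y_lab)
    g_pse = logreg_grad(w, X_pse, y_pse)
    denom = np.linalg.norm(g_lab) * np.linalg.norm(g_pse) + 1e-12
    return float(g_lab @ g_pse / denom)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
X = rng.normal(size=(16, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)

# Pseudo labels consistent with the labeled data tend to yield a higher
# reward than flipped (noisy) pseudo labels on the same examples.
good = gradient_imitation_reward(w, X[:8], y[:8], X[8:], y[8:])
bad = gradient_imitation_reward(w, X[:8], y[:8], X[8:], 1 - y[8:])
print(round(good, 3), round(bad, 3))
```

In the paper's reinforcement-learning framing, a reward of this kind would serve as the feedback signal used to select or reweight pseudo-labeled instances through trial and error.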
Related papers
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data [43.46328487543664]
Relation extraction (RE) aims to extract relations from sentences and documents.
Recent studies showed that many RE datasets are incompletely annotated.
This is known as the false negative problem, in which valid relations are falsely annotated as 'no_relation'.
arXiv Detail & Related papers (2023-06-16T09:01:45Z)
- Continual Contrastive Finetuning Improves Low-Resource Relation Extraction [34.76128090845668]
Relation extraction has been particularly challenging in low-resource scenarios and domains.
Recent literature has tackled low-resource RE by self-supervised learning.
We propose to pretrain and finetune the RE model using consistent objectives of contrastive learning.
arXiv Detail & Related papers (2022-12-21T07:30:22Z)
- Gradient Imitation Reinforcement Learning for General Low-Resource Information Extraction [80.64518530825801]
We develop a Gradient Imitation Reinforcement Learning (GIRL) method to encourage pseudo-labeled data to imitate the gradient descent direction on labeled data.
We also leverage GIRL to solve all IE sub-tasks (named entity recognition, relation extraction, and event extraction) in low-resource settings.
arXiv Detail & Related papers (2022-11-11T05:37:19Z)
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method, named Learning to Bootstrap (L2B).
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples through meta-learning.
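The weighting scheme this blurb describes — mixing the observed label and the model's own pseudo label with per-sample importance weights — can be sketched as a bootstrapped cross-entropy. This is a hedged toy illustration, not L2B's actual implementation; in the paper the weights are produced by meta-learning, whereas here they are supplied by hand.

```python
import numpy as np

def l2b_style_loss(p, y_obs, y_pseudo, alpha, beta):
    """Per-sample bootstrapped cross-entropy: a weighted mix of the
    observed (possibly noisy) label and the model's own pseudo label,
    with per-sample weights alpha (observed) and beta (pseudo)."""
    eps = 1e-12
    ce_obs = -(y_obs * np.log(p + eps)).sum(axis=1)
    ce_pse = -(y_pseudo * np.log(p + eps)).sum(axis=1)
    return alpha * ce_obs + beta * ce_pse

p = np.array([[0.9, 0.1], [0.2, 0.8]])       # model predictions
y_obs = np.array([[1.0, 0.0], [1.0, 0.0]])   # second observed label is noisy
y_pse = np.eye(2)[p.argmax(axis=1)]          # model's own pseudo labels
# Down-weighting the noisy observed label shifts its loss toward the
# pseudo label, limiting the damage from the wrong annotation.
loss = l2b_style_loss(p, y_obs, y_pse,
                      alpha=np.array([1.0, 0.2]),
                      beta=np.array([0.0, 0.8]))
print(loss.round(3))
```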
arXiv Detail & Related papers (2022-02-09T05:57:08Z)
- MapRE: An Effective Semantic Mapping Approach for Low-resource Relation Extraction [11.821464352959454]
We propose a framework considering both label-agnostic and label-aware semantic mapping information for low-resource relation extraction.
We show that incorporating the above two types of mapping information in both pretraining and fine-tuning can significantly improve the model performance.
arXiv Detail & Related papers (2021-09-09T09:02:23Z)
- Semi-supervised Relation Extraction via Incremental Meta Self-Training [56.633441255756075]
Semi-Supervised Relation Extraction methods aim to leverage unlabeled data in addition to learning from limited samples.
Existing self-training methods suffer from the gradual drift problem, where noisy pseudo labels on unlabeled data are incorporated during training.
We propose a method called MetaSRE, in which a Relation Label Generation Network assesses the quality of pseudo labels by (meta) learning from the successful and failed attempts of a Relation Classification Network, as an additional meta-objective.
arXiv Detail & Related papers (2020-10-06T03:54:11Z)
- Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels [92.98756432746482]
We study a weakly supervised problem called learning with complementary labels.
We show that the quality of gradient estimation matters more in risk minimization.
We propose a novel surrogate complementary loss (SCL) framework that trades zero bias for reduced variance.
arXiv Detail & Related papers (2020-07-05T04:19:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.