A Practical Framework for Relation Extraction with Noisy Labels Based on
Doubly Transitional Loss
- URL: http://arxiv.org/abs/2004.13786v1
- Date: Tue, 28 Apr 2020 19:38:20 GMT
- Title: A Practical Framework for Relation Extraction with Noisy Labels Based on
Doubly Transitional Loss
- Authors: Shanchan Wu and Kai Fan
- Abstract summary: We introduce a practical end-to-end deep learning framework for relation extraction with automatically labeled, noisy data.
One transition is parameterized by a non-linear transformation between hidden layers.
Another is an explicit probability transition matrix that captures the direct conversion between labels.
- Score: 14.121872633596452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Either human annotation or rule-based automatic labeling is an
effective method of augmenting data for relation extraction. However, the
inevitable mislabeling problem, for example under distant supervision, may
degrade the performance of many existing methods. To address this issue, we
introduce a
practical end-to-end deep learning framework, including a standard feature
extractor and a novel noisy classifier with our proposed doubly transitional
mechanism. One transition is parameterized by a non-linear transformation
between hidden layers that implicitly represents the conversion between the
true and noisy labels; it can be readily optimized together with other model
parameters. The other is an explicit probability transition matrix that
captures the direct conversion between labels but must be derived via an EM
algorithm. We conduct experiments on the NYT dataset and
SemEval 2018 Task 7. The empirical results show comparable or better
performance over state-of-the-art methods.
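The explicit transition mechanism can be illustrated with a minimal sketch. This is not the paper's implementation; the matrix values and variable names are illustrative assumptions. It shows how a row-stochastic transition matrix T, with T[i, j] = P(noisy label j | true label i), maps a classifier's distribution over true labels to the distribution over the labels actually observed.

```python
import numpy as np

num_classes = 4
flip_prob = 0.05  # illustrative per-class flip probability

# Toy transition matrix: mostly correct labels, uniform flip probability.
T = np.full((num_classes, num_classes), flip_prob)
np.fill_diagonal(T, 1.0 - flip_prob * (num_classes - 1))
assert np.allclose(T.sum(axis=1), 1.0)  # each row is a valid distribution

# A classifier's predicted distribution over *true* labels for one example.
p_true = np.array([0.7, 0.1, 0.1, 0.1])

# Implied distribution over the *observed* (noisy) labels.
p_noisy = p_true @ T
print(p_noisy)  # still a valid probability distribution
```

In a framework like the one described, a cross-entropy loss against the observed noisy labels would be computed on `p_noisy` rather than `p_true`, so the classifier underneath is trained toward the clean-label distribution.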
Related papers
- On Robust Learning from Noisy Labels: A Permutation Layer Approach [53.798757734297986]
This paper introduces a permutation layer learning approach, termed PermLL, to dynamically calibrate the training process of a deep neural network (DNN).
We provide two variants of PermLL in this paper: one applies the permutation layer to the model's prediction, while the other applies it directly to the given noisy label.
We validate PermLL experimentally and show that it achieves state-of-the-art performance on both real and synthetic datasets.
arXiv Detail & Related papers (2022-11-29T03:01:48Z)
- Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation [172.81824511381984]
The transition matrix T(x) is unidentifiable under instance-dependent noise (IDN).
We propose an assumption on the geometry of T(x): the closer two instances are, the more similar their corresponding transition matrices should be.
Our method is superior to state-of-the-art approaches for label-noise learning under the challenging IDN.
arXiv Detail & Related papers (2022-06-06T04:12:01Z)
- Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization [88.91872713134342]
We propose a theoretically grounded method that can estimate the noise transition matrix and learn a classifier simultaneously.
We show the effectiveness of the proposed method through experiments on benchmark and real-world datasets.
arXiv Detail & Related papers (2021-02-04T05:09:18Z)
- Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels [86.5943044285146]
The label noise transition matrix $T$ reflects the probabilities that true labels flip into noisy ones.
In this paper, we focus on learning under the mixed closed-set and open-set label noise.
Our method can better model the mixed label noise, as reflected in its more robust performance compared with prior state-of-the-art label-noise learning methods.
arXiv Detail & Related papers (2020-12-02T02:42:45Z)
- Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning [157.2709657207203]
Existing methods for estimating the transition matrix rely heavily on estimating the noisy class posterior.
We introduce an intermediate class to avoid directly estimating the noisy class posterior.
By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.
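The factorization idea above can be shown with a toy sketch. The names and numbers here are illustrative, not taken from the paper; the only claim is the general fact that the product of two row-stochastic matrices is again row-stochastic, so composing clean-to-intermediate and intermediate-to-noisy transitions yields a valid overall transition matrix.

```python
import numpy as np

# Factorizing a transition matrix through an intermediate class:
# T_clean_to_noisy = T_clean_to_mid @ T_mid_to_noisy.
T_clean_to_mid = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
])
T_mid_to_noisy = np.array([
    [0.8, 0.2],
    [0.3, 0.7],
])

T = T_clean_to_mid @ T_mid_to_noisy
print(T)
print(T.sum(axis=1))  # each row still sums to 1
```

The practical appeal is that each factor can be estimated more reliably than the original matrix, and the composition recovers the quantity of interest.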
arXiv Detail & Related papers (2020-06-14T05:48:20Z)
- Meta Transition Adaptation for Robust Deep Learning with Noisy Labels [61.8970957519509]
This study proposes a new meta-transition-learning strategy for the task.
Specifically, through the sound guidance of a small set of meta data with clean labels, the noise transition matrix and the classifier parameters can be mutually ameliorated.
Our method can more accurately estimate the transition matrix, as reflected in its more robust performance compared with prior art.
arXiv Detail & Related papers (2020-06-10T07:27:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.