SENT: Sentence-level Distant Relation Extraction via Negative Training
- URL: http://arxiv.org/abs/2106.11566v1
- Date: Tue, 22 Jun 2021 06:49:05 GMT
- Title: SENT: Sentence-level Distant Relation Extraction via Negative Training
- Authors: Ruotian Ma, Tao Gui, Linyang Li, Qi Zhang, Yaqian Zhou and Xuanjing
Huang
- Abstract summary: Using bag labels for sentence-level training will introduce much noise, thus severely degrading performance.
We propose the use of negative training (NT), in which a model is trained using complementary labels, i.e., "the instance does not belong to these complementary labels."
Based on NT, we propose a sentence-level framework, SENT, for distant relation extraction.
- Score: 45.98674099149065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distant supervision for relation extraction provides uniform bag labels for
each sentence inside the bag, while accurate sentence labels are important for
downstream applications that need the exact relation type. Directly using bag
labels for sentence-level training will introduce much noise, thus severely
degrading performance. In this work, we propose the use of negative training
(NT), in which a model is trained using complementary labels, i.e., "the
instance does not belong to these complementary labels." Since the
probability of selecting a true label as a complementary label is low, NT
provides less noisy information. Furthermore, the model trained with NT is able
to separate the noisy data from the training data. Based on NT, we propose a
sentence-level framework, SENT, for distant relation extraction. SENT not only
filters the noisy data to construct a cleaner dataset, but also performs a
re-labeling process to transform the noisy data into useful training data, thus
further benefiting the model's performance. Experimental results show that the
proposed method significantly outperforms previous methods in both
sentence-level evaluation and de-noising effect.
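The NT objective described above can be made concrete with a short sketch. Below is a minimal, illustrative PyTorch version of such a loss; the uniform sampling of one complementary label per sentence, the tensor shapes, and the stability epsilon are assumptions for exposition, not the authors' released implementation (SENT's filtering and re-labeling stages are omitted).
```python
import torch
import torch.nn.functional as F

def negative_training_loss(logits: torch.Tensor, bag_labels: torch.Tensor) -> torch.Tensor:
    """logits: (batch, num_classes); bag_labels: (batch,) distant bag labels."""
    num_classes = logits.size(1)
    # Sample one complementary label per sentence, uniformly from all
    # classes except the (possibly noisy) bag label.
    offsets = torch.randint(1, num_classes, bag_labels.shape, device=logits.device)
    comp_labels = (bag_labels + offsets) % num_classes
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    # NT pushes DOWN the probability of the complementary label:
    # loss = -log(1 - p_comp). A true label is rarely sampled as
    # complementary, so this supervision is far less noisy than bag labels.
    return -torch.log(1.0 - p_comp + 1e-8).mean()
```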
Related papers
- Extracting Clean and Balanced Subset for Noisy Long-tailed Classification [66.47809135771698]
We develop a novel pseudo labeling method using class prototypes from the perspective of distribution matching.
By setting a manually-specified probability measure, we can reduce the side-effects of noisy and long-tailed data simultaneously.
Our method can extract this class-balanced subset with clean labels, which brings effective performance gains for long-tailed classification with label noise.
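As a rough sketch of prototype-based pseudo-labeling in this spirit (class-mean prototypes and cosine-similarity assignment are simplifying assumptions; the paper's distribution-matching step with its specified probability measure is omitted):
```python
import numpy as np

def prototype_pseudo_labels(feats, clean_feats, clean_y, num_classes):
    """feats: (n, d) features of noisy data; clean_feats/clean_y: a small trusted set."""
    # Class prototypes = mean feature of each class in the trusted set
    # (assumes every class is present there).
    protos = np.stack([clean_feats[clean_y == c].mean(axis=0)
                       for c in range(num_classes)])            # (C, d)
    feats_n = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    protos_n = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sims = feats_n @ protos_n.T                                 # cosine similarity, (n, C)
    return sims.argmax(axis=1), sims.max(axis=1)                # pseudo-labels, confidences
```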
arXiv Detail & Related papers (2024-04-10T07:34:37Z)
- Adaptive Integration of Partial Label Learning and Negative Learning for Enhanced Noisy Label Learning [23.847160480176697]
We propose a simple yet powerful idea called NPN, which revolutionizes noisy label learning by integrating partial label learning (PLL) and negative learning (NL).
We generate reliable complementary labels using all non-candidate labels for NL to enhance model robustness through indirect supervision.
Experiments conducted on both synthetically corrupted and real-world noisy datasets demonstrate the superiority of NPN compared to other state-of-the-art (SOTA) methods.
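A minimal sketch of the negative-learning side, where every non-candidate label serves as a complementary label (shapes and the per-instance averaging are assumptions, not the paper's exact formulation):
```python
import torch
import torch.nn.functional as F

def nl_loss_from_candidates(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """logits: (batch, C); candidate_mask: (batch, C) bool, True for candidate labels."""
    probs = F.softmax(logits, dim=1)
    non_candidates = (~candidate_mask).float()
    # Negative learning: -log(1 - p_k) for every non-candidate class k,
    # averaged per instance; all non-candidates act as complementary labels.
    nl = -torch.log(1.0 - probs + 1e-8) * non_candidates
    return (nl.sum(dim=1) / non_candidates.sum(dim=1).clamp(min=1.0)).mean()
```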
arXiv Detail & Related papers (2023-12-15T03:06:19Z)
- Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data [70.25049762295193]
We introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated data during training.
We propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels for unlabeled data.
Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance.
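A minimal sketch of instance-wise soft weighting in this spirit, assuming confidence from an auxiliary classifier as the weighting signal (a simplification; the paper's curriculum schedule and GAN losses are omitted):
```python
import torch
import torch.nn.functional as F

def soft_curriculum_weights(cls_logits: torch.Tensor):
    """cls_logits: (batch, C) from an auxiliary classifier over noisy/unlabeled images."""
    probs = F.softmax(cls_logits, dim=1)
    conf, pseudo_labels = probs.max(dim=1)
    # Soft weights in [0, 1]: no sample is discarded outright; low-confidence
    # samples simply contribute less to the adversarial losses.
    return conf, pseudo_labels

# Usage: scale per-sample discriminator/generator losses, e.g.
# weighted = (weights * d_loss_per_sample).mean()
```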
arXiv Detail & Related papers (2023-07-17T08:31:59Z)
- Rank-Aware Negative Training for Semi-Supervised Text Classification [3.105629960108712]
Semi-supervised text classification (SSTC) paradigms typically employ the spirit of self-training.
This paper presents a Rank-aware Negative Training (RNT) framework that addresses SSTC from a learning-with-noisy-labels perspective, as sketched below.
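A minimal sketch of how ranking and negative training could combine, assuming a confidence-based ranking, an even split, and randomly sampled complementary labels; the paper's actual ranking signal and split may differ:
```python
import torch
import torch.nn.functional as F

def rnt_style_loss(logits, pseudo_labels, reliability):
    """logits: (batch, C); pseudo_labels: (batch,); reliability: (batch,) ranking scores."""
    C = logits.size(1)
    order = reliability.argsort(descending=True)
    k = logits.size(0) // 2
    top, rest = order[:k], order[k:]
    # Trusted half: ordinary positive training on the pseudo-labels.
    pos = F.cross_entropy(logits[top], pseudo_labels[top])
    # Unreliable half: negative training on a random complementary label.
    off = torch.randint(1, C, (rest.numel(),), device=logits.device)
    comp = (pseudo_labels[rest] + off) % C
    p = F.softmax(logits[rest], dim=1).gather(1, comp[:, None]).squeeze(1)
    neg = -torch.log(1.0 - p + 1e-8).mean()
    return pos + neg
```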
arXiv Detail & Related papers (2023-06-13T08:41:36Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
This paper proposes a label distribution perspective for PU learning.
Motivated by this view, we pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
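A minimal sketch of one way to encode label-distribution consistency, assuming a known positive class prior and an L1 penalty (the paper's exact objective may differ):
```python
import torch

def label_dist_consistency(unlabeled_probs: torch.Tensor, prior: float) -> torch.Tensor:
    """unlabeled_probs: (batch,) predicted P(y=1|x) on unlabeled data; prior: class prior."""
    # Penalize the gap between the predicted positive rate on unlabeled data
    # and the known (or estimated) ground-truth positive proportion.
    return (unlabeled_probs.mean() - prior).abs()
```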
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Context-based Virtual Adversarial Training for Text Classification with Noisy Labels [1.9508698179748525]
We propose context-based virtual adversarial training (ConVAT) to prevent a text classifier from overfitting to noisy labels.
Unlike previous works, the proposed method performs adversarial training at the context level rather than at the input level.
We conduct extensive experiments on four text classification datasets with two types of label noises.
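A minimal sketch of virtual adversarial training applied to context representations rather than raw inputs; `model.classify` (a hypothetical head mapping hidden states to logits) and the single power-iteration step are assumptions for illustration:
```python
import torch
import torch.nn.functional as F

def context_vat_loss(model, hidden: torch.Tensor, eps: float = 1.0, xi: float = 1e-6):
    """hidden: (batch, d) context representations; model.classify maps them to logits."""
    with torch.no_grad():
        p = F.softmax(model.classify(hidden), dim=1)          # current predictions
    d = torch.randn_like(hidden)
    d = xi * d / d.norm(dim=1, keepdim=True)
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model.classify(hidden + d), dim=1), p,
                  reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]
    r_adv = eps * grad / grad.norm(dim=1, keepdim=True)       # worst-case context shift
    # Smoothness objective: predictions should stay stable under r_adv.
    return F.kl_div(F.log_softmax(model.classify(hidden + r_adv.detach()), dim=1), p,
                    reduction="batchmean")
```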
arXiv Detail & Related papers (2022-05-29T14:19:49Z)
- A Novel Perspective for Positive-Unlabeled Learning via Noisy Labels [49.990938653249415]
This research presents a methodology that assigns initial pseudo-labels to unlabeled data, treats the result as noisy-labeled data, and trains a deep neural network on it.
Experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art methods on several benchmark datasets.
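A minimal sketch of the initialization step, assuming all unlabeled samples start as (noisy) negatives; the paper's pseudo-labeling rule and the downstream noise-robust training are not shown:
```python
import numpy as np

def build_noisy_pu_dataset(pos_x: np.ndarray, unl_x: np.ndarray):
    """Stack positives (label 1) and unlabeled samples (provisional label 0)."""
    x = np.concatenate([pos_x, unl_x], axis=0)
    # The zeros are *noisy* labels: an unknown fraction of unlabeled samples
    # is truly positive, which a noise-robust learner must tolerate.
    y = np.concatenate([np.ones(len(pos_x), dtype=np.int64),
                        np.zeros(len(unl_x), dtype=np.int64)])
    return x, y
```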
arXiv Detail & Related papers (2021-03-08T11:46:02Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
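A minimal sketch of detect-and-relabel, assuming maximum softmax probability as the confidence signal and a fixed threshold; the paper's detector may differ:
```python
import numpy as np

def relabel_ood(probs: np.ndarray, labels: np.ndarray, tau: float = 0.5):
    """probs: (n, C) model predictions; labels: (n,) given, possibly OOD, labels."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    suspect = pred != labels                  # model disagrees with the given label
    relabel = suspect & (conf >= tau)         # confident disagreements are relabeled
    new_labels = labels.copy()
    new_labels[relabel] = pred[relabel]       # keep, rather than discard, OOD data
    return new_labels
```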
arXiv Detail & Related papers (2020-02-11T21:08:06Z)