Uncertainty-Aware Bootstrap Learning for Joint Extraction on
Distantly-Supervised Data
- URL: http://arxiv.org/abs/2305.03827v2
- Date: Thu, 8 Jun 2023 22:22:24 GMT
- Title: Uncertainty-Aware Bootstrap Learning for Joint Extraction on
Distantly-Supervised Data
- Authors: Yufei Li, Xiao Yu, Yanchi Liu, Haifeng Chen, Cong Liu
- Abstract summary: Bootstrap learning is motivated by the intuition that the higher the uncertainty of an instance, the more likely the model's confidence is inconsistent with the ground truth.
We first explore instance-level data uncertainty to create an initial set of high-confidence examples.
During bootstrap learning, we propose self-ensembling as a regularizer to alleviate inter-model uncertainty produced by noisy labels.
- Score: 36.54640096189285
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Jointly extracting entity pairs and their relations is challenging when
working on distantly-supervised data with ambiguous or noisy labels. To
mitigate such impact, we propose uncertainty-aware bootstrap learning, which is
motivated by the intuition that the higher the uncertainty of an instance, the
more likely the model's confidence is inconsistent with the ground truth.
Specifically, we first explore instance-level data uncertainty to create an
initial set of high-confidence examples. This subset serves to filter out noisy
instances and helps the model converge quickly in the early stage.
During bootstrap learning, we propose self-ensembling as a regularizer to
alleviate inter-model uncertainty produced by noisy labels. We further define
probability variance of joint tagging probabilities to estimate inner-model
parametric uncertainty, which is used to select and build up new reliable
training instances for the next iteration. Experimental results on two large
datasets reveal that our approach outperforms existing strong baselines and
related methods.
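The abstract's inner-model parametric uncertainty, defined as the probability variance of joint tagging probabilities, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the per-token averaging, and the thresholding rule for selecting reliable instances are assumptions, and stochastic forward passes (e.g. MC dropout or an ensemble) are assumed to supply the probability samples.

```python
import numpy as np

def probability_variance(prob_samples: np.ndarray) -> np.ndarray:
    """Per-token parametric uncertainty from stochastic forward passes.

    prob_samples: array of shape (T, L, C) -- T stochastic passes,
    L tokens, C tag classes, each (L, C) slice a tagging distribution.
    Returns shape (L,): variance over passes, averaged over classes.
    """
    return prob_samples.var(axis=0).mean(axis=-1)

def select_reliable(prob_samples: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean mask of tokens whose uncertainty falls below `threshold`;
    low-variance instances would be kept for the next bootstrap iteration."""
    return probability_variance(prob_samples) < threshold
```

In this reading, identical predictions across passes give zero variance (high confidence), while disagreement among passes raises the score, so thresholding the variance implements the paper's idea of building a new reliable training set each iteration.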
Related papers
- Uncertainty-aware self-training with expectation maximization basis transformation [9.7527450662978]
We propose a new self-training framework to combine uncertainty information of both model and dataset.
Specifically, we propose to use Expectation-Maximization (EM) to smooth the labels and comprehensively estimate the uncertainty information.
arXiv Detail & Related papers (2024-05-02T11:01:31Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - Late Stopping: Avoiding Confidently Learning from Mislabeled Examples [61.00103151680946]
We propose a new framework, Late Stopping, which leverages the intrinsic robust learning ability of DNNs through a prolonged training process.
We empirically observe that mislabeled and clean examples exhibit differences in the number of epochs required for them to be consistently and correctly classified.
Experimental results on benchmark-simulated and real-world noisy datasets demonstrate that the proposed method outperforms state-of-the-art counterparts.
arXiv Detail & Related papers (2023-08-26T12:43:25Z) - Overcoming Overconfidence for Active Learning [1.2776312584227847]
We present two novel methods to address the problem of overconfidence that arises in the active learning scenario.
The first is an augmentation strategy named Cross-Mix-and-Mix (CMaM), which aims to calibrate the model by expanding the limited training distribution.
The second is a selection strategy named Ranked Margin Sampling (RankedMS), which prevents choosing data that leads to overly confident predictions.
arXiv Detail & Related papers (2023-08-21T09:04:54Z) - Open Set Relation Extraction via Unknown-Aware Training [72.10462476890784]
We propose an unknown-aware training method, regularizing the model by dynamically synthesizing negative instances.
Inspired by text adversarial attacks, we adaptively apply small but critical perturbations to original training instances.
Experimental results show that this method achieves SOTA unknown relation detection without compromising the classification of known relations.
arXiv Detail & Related papers (2023-06-08T05:45:25Z) - Uncertainty in Contrastive Learning: On the Predictability of Downstream
Performance [7.411571833582691]
We study whether the uncertainty of such a representation can be quantified for a single datapoint in a meaningful way.
We show that this goal can be achieved by directly estimating the distribution of the training data in the embedding space.
arXiv Detail & Related papers (2022-07-19T15:44:59Z) - Uncertainty Estimation for Language Reward Models [5.33024001730262]
Language models can learn a range of capabilities from unsupervised training on text corpora.
It is often easier for humans to choose between options than to provide labeled data, and prior work has achieved state-of-the-art performance by training a reward model from such preference comparisons.
We seek to address these problems via uncertainty estimation, which can improve sample efficiency and robustness using active learning and risk-averse reinforcement learning.
arXiv Detail & Related papers (2022-03-14T20:13:21Z) - Learning from Similarity-Confidence Data [94.94650350944377]
We investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data.
We propose an unbiased estimator of the classification risk that can be calculated from only Sconf data and show that the estimation error bound achieves the optimal convergence rate.
arXiv Detail & Related papers (2021-02-13T07:31:16Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)