Pearls from Pebbles: Improved Confidence Functions for Auto-labeling
- URL: http://arxiv.org/abs/2404.16188v1
- Date: Wed, 24 Apr 2024 20:22:48 GMT
- Title: Pearls from Pebbles: Improved Confidence Functions for Auto-labeling
- Authors: Harit Vishwakarma, Reid Chen, Sui Jiet Tay, Satya Sai Srinath Namburi, Frederic Sala, Ramya Korlakai Vinayak
- Abstract summary: Threshold-based auto-labeling (TBAL) works by finding a threshold on a model's confidence scores above which it can accurately label unlabeled data points.
We propose a framework for studying the \emph{optimal} TBAL confidence function.
We develop a new post-hoc method specifically designed to maximize performance in TBAL systems.
- Score: 51.44986105969375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Auto-labeling is an important family of techniques that produce labeled training sets with minimum manual labeling. A prominent variant, threshold-based auto-labeling (TBAL), works by finding a threshold on a model's confidence scores above which it can accurately label unlabeled data points. However, many models are known to produce overconfident scores, leading to poor TBAL performance. While a natural idea is to apply off-the-shelf calibration methods to alleviate the overconfidence issue, such methods still fall short. Rather than experimenting with ad-hoc choices of confidence functions, we propose a framework for studying the \emph{optimal} TBAL confidence function. We develop a tractable version of the framework to obtain \texttt{Colander} (Confidence functions for Efficient and Reliable Auto-labeling), a new post-hoc method specifically designed to maximize performance in TBAL systems. We perform an extensive empirical evaluation of our method \texttt{Colander} and compare it against methods designed for calibration. \texttt{Colander} achieves up to 60\% improvements on coverage over the baselines while maintaining auto-labeling error below $5\%$ and using the same amount of labeled data as the baselines.
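As a rough illustration of the TBAL mechanism the abstract describes, the sketch below searches a held-out labeled validation set for a confidence threshold that keeps auto-labeling error at or below a target. The function names and the exhaustive search are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def tbal_threshold(val_conf, val_correct, max_error=0.05):
    """Illustrative TBAL-style threshold search (not the paper's algorithm).

    val_conf    -- model confidence scores on a labeled validation set
    val_correct -- boolean array: was each validation prediction correct?
    Returns the lowest threshold whose validation error, measured over the
    points scored above it, stays at or below max_error (None if none does).
    """
    best = None
    for t in np.sort(np.unique(val_conf))[::-1]:   # high to low: coverage grows
        mask = val_conf >= t                       # points that would be auto-labeled
        error = 1.0 - val_correct[mask].mean()
        if error <= max_error:
            best = t                               # remember the lowest passing threshold
    return best

# Unlabeled points scored above the threshold are auto-labeled; the rest
# go to human annotators.
```

In this picture, Colander's role is to learn a better confidence function to feed into such a search, instead of the raw (often overconfident) softmax scores.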
Related papers
- Show Your Work with Confidence: Confidence Bands for Tuning Curves [51.12106543561089]
Tuning curves plot validation performance as a function of tuning effort.
We present the first method to construct valid confidence bands for tuning curves.
We validate our design with ablations, analyze the effect of sample size, and provide guidance on comparing models with our method.
arXiv Detail & Related papers (2023-11-16T00:50:37Z)
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z)
- Boosting Semi-Supervised Learning by bridging high and low-confidence predictions [4.18804572788063]
Pseudo-labeling is a crucial technique in semi-supervised learning (SSL).
We propose a new method called ReFixMatch, which aims to utilize all of the unlabeled data during training.
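For context, the fixed-threshold pseudo-labeling step that methods like ReFixMatch build on can be sketched as follows (a FixMatch-style baseline; the names and the 0.95 threshold are illustrative):

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits_weak, logits_strong, threshold=0.95):
    """Fixed-threshold pseudo-labeling baseline. Unlabeled points whose
    weak-augmentation confidence falls below the threshold contribute
    nothing -- exactly the data ReFixMatch aims to put back to use."""
    with torch.no_grad():
        probs = F.softmax(logits_weak, dim=-1)
        conf, pseudo = probs.max(dim=-1)       # confidence and hard pseudo-label
        mask = (conf >= threshold).float()     # 1 only for confident points
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * mask).mean()
```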
arXiv Detail & Related papers (2023-08-15T00:27:18Z)
- Alternative Pseudo-Labeling for Semi-Supervised Automatic Speech Recognition [49.42732949233184]
When labeled data is insufficient, semi-supervised learning with the pseudo-labeling technique can significantly improve the performance of automatic speech recognition.
Taking noisy labels as ground-truth in the loss function results in suboptimal performance.
We propose a novel framework named alternative pseudo-labeling to tackle the issue of noisy pseudo-labels.
arXiv Detail & Related papers (2023-08-12T12:13:52Z)
- Confidence Estimation Using Unlabeled Data [12.512654188295764]
We propose the first confidence estimation method for a semi-supervised setting, where most training labels are unavailable.
We use training consistency as a surrogate function and propose a consistency ranking loss for confidence estimation.
On both image classification and segmentation tasks, our method achieves state-of-the-art performance in confidence estimation.
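A hypothetical pairwise form of such a ranking objective (the paper's exact loss may differ): if one example was more consistent during training than another, its predicted confidence should rank higher.

```python
import torch
import torch.nn.functional as F

def consistency_ranking_loss(conf, consistency, margin=0.0):
    """Hypothetical pairwise ranking loss: predicted confidences should
    order examples the same way their training consistency does.
    conf, consistency -- 1-D tensors over a batch."""
    d_conf = conf.unsqueeze(1) - conf.unsqueeze(0)               # conf_i - conf_j
    d_cons = consistency.unsqueeze(1) - consistency.unsqueeze(0)
    sign = torch.sign(d_cons)             # +1 where i was more consistent than j
    # Penalize pairs whose confidence ordering disagrees, up to a margin.
    pairwise = F.relu(margin - sign * d_conf)
    return pairwise[d_cons != 0].mean()
```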
arXiv Detail & Related papers (2023-07-19T20:11:30Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
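The trade-off can be pictured as replacing a hard keep/drop threshold with a smooth weight over confidence. The truncated-Gaussian shape below follows SoftMatch's general idea, but the statistics are simplified to plain arguments (the method itself tracks them as running estimates):

```python
import torch

def soft_sample_weight(conf, mu, sigma):
    """SoftMatch-style smooth weighting: full weight for pseudo-labels
    whose confidence exceeds the running mean mu, Gaussian decay below
    it, so low-confidence data is down-weighted rather than discarded."""
    gauss = torch.exp(-((conf - mu) ** 2) / (2 * sigma ** 2))
    return torch.where(conf >= mu, torch.ones_like(conf), gauss)
```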
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- How Does Beam Search improve Span-Level Confidence Estimation in Generative Sequence Labeling? [11.481435098152893]
This paper aims to provide some empirical insights on estimating model confidence for generative sequence labeling.
As verified over six public datasets, we show that our proposed approach significantly reduces calibration errors of the predictions of a generative sequence labeling model.
arXiv Detail & Related papers (2022-12-21T05:01:01Z)
- FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning [46.95063831057502]
We propose \emph{FreeMatch} to define and adjust the confidence threshold in a self-adaptive manner according to the model's learning status.
FreeMatch achieves 5.78%, 13.59%, and 1.28% error rate reduction over the latest state-of-the-art method FlexMatch on CIFAR-10 with 1 label per class.
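A toy version of the self-adaptive idea, as a sketch (real FreeMatch also modulates the threshold per class, which is omitted here):

```python
import torch

class SelfAdaptiveThreshold:
    """Toy global self-adaptive threshold in the spirit of FreeMatch: an
    exponential moving average of the model's mean confidence on unlabeled
    batches serves as the pseudo-labeling threshold, rising as training
    progresses and the model's learning status improves."""
    def __init__(self, num_classes, momentum=0.999):
        self.m = momentum
        self.tau = 1.0 / num_classes        # near-uniform confidence at init

    def update(self, probs):                # probs: (batch, num_classes)
        batch_conf = probs.max(dim=-1).values.mean().item()
        self.tau = self.m * self.tau + (1 - self.m) * batch_conf
        return self.tau                     # use as the current threshold
```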
arXiv Detail & Related papers (2022-05-15T10:07:52Z)
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, leveraging only 20-30 labeled samples per class per task for training and validation, can perform within 3% of fully supervised pre-trained language models.
arXiv Detail & Related papers (2020-06-27T08:13:58Z)
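One standard way to obtain such uncertainty estimates is Monte-Carlo dropout, sketched below; the paper's exact estimator and sample-selection scheme may differ.

```python
import torch

def mc_dropout_uncertainty(model, x, n_passes=10):
    """Monte-Carlo dropout: keep dropout active at inference time and
    measure the spread of predictions across stochastic forward passes."""
    model.train()                           # leaves dropout layers enabled
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1)
                             for _ in range(n_passes)])
    mean = probs.mean(dim=0)
    # Predictive entropy: larger values indicate less certain predictions.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy
```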