Binary Classification from Positive Data with Skewed Confidence
- URL: http://arxiv.org/abs/2001.10642v1
- Date: Wed, 29 Jan 2020 00:04:36 GMT
- Title: Binary Classification from Positive Data with Skewed Confidence
- Authors: Kazuhiko Shinoda, Hirotaka Kaji, Masashi Sugiyama
- Abstract summary: Positive-confidence (Pconf) classification is a promising weakly-supervised learning method.
In practice, the confidence may be skewed by bias arising in an annotation process.
We introduce a parameterized model of the skewed confidence and propose a method for selecting the hyperparameter.
- Score: 85.18941440826309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Positive-confidence (Pconf) classification [Ishida et al., 2018] is a
promising weakly-supervised learning method which trains a binary classifier
only from positive data equipped with confidence. However, in practice, the
confidence may be skewed by bias arising in an annotation process. The Pconf
classifier cannot be properly learned with skewed confidence, and consequently,
classification performance may deteriorate. In this paper, we
introduce a parameterized model of the skewed confidence and propose a
method for selecting the hyperparameter that cancels out the negative impact
of skewed confidence, under the assumption that the misclassification
rate of positive samples is available as prior knowledge. We demonstrate the effectiveness
of the proposed method through a synthetic experiment with simple linear models
and benchmark problems with neural network models. We also apply our method to
drivers' drowsiness prediction to show that it works well with a real-world
problem where confidence is obtained based on manual annotation.
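For orientation, here is a minimal sketch in Python/PyTorch of the Pconf empirical risk from Ishida et al. [2018] together with the kind of hyperparameter selection the abstract describes. The power-law skew model r**gamma and the train_fn helper are illustrative assumptions made for this sketch, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

def pconf_risk(scores, conf, eps=1e-6):
    """Empirical Pconf risk (Ishida et al., 2018) with the logistic loss.

    scores: classifier outputs g(x_i) on positive samples only, shape (n,)
    conf:   annotated confidences r_i = p(y=+1 | x_i), shape (n,)
    """
    r = conf.clamp(min=eps)
    loss_pos = F.softplus(-scores)   # logistic loss l(g(x))
    loss_neg = F.softplus(scores)    # logistic loss l(-g(x))
    # R(g) = E_+[ l(g(x)) + (1 - r)/r * l(-g(x)) ], up to a constant factor
    return (loss_pos + (1.0 - r) / r * loss_neg).mean()

def deskewed_risk(scores, conf, gamma):
    """Pconf risk after an assumed power-law de-skewing r -> r**gamma."""
    return pconf_risk(scores, conf.pow(gamma))

def select_gamma(train_fn, val_x, gammas, rho):
    """Pick the gamma whose trained classifier misclassifies held-out
    positive samples at a rate closest to the known rate rho
    (the prior knowledge assumed in the paper).

    train_fn: hypothetical helper that minimizes deskewed_risk for a
              given gamma and returns the trained model.
    """
    best_gamma, best_gap = None, float("inf")
    for gamma in gammas:
        model = train_fn(gamma)
        with torch.no_grad():
            # positives predicted negative (g(x) <= 0) are misclassified
            err = (model(val_x).squeeze() <= 0).float().mean().item()
        if abs(err - rho) < best_gap:
            best_gamma, best_gap = gamma, abs(err - rho)
    return best_gamma
```

Since only positive data are available, the known misclassification rate of positive samples plays the role that a labeled validation set would normally play when selecting the hyperparameter.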
Related papers
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, yet largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z)
- Confidence Estimation Using Unlabeled Data [12.512654188295764]
We propose the first confidence estimation method for the semi-supervised setting, where most training labels are unavailable.
We use training consistency as a surrogate function and propose a consistency ranking loss for confidence estimation.
On both image classification and segmentation tasks, our method achieves state-of-the-art performance in confidence estimation.
arXiv Detail & Related papers (2023-07-19T20:11:30Z)
- Trust, but Verify: Using Self-Supervised Probing to Improve Trustworthiness [29.320691367586004]
We introduce a new approach of self-supervised probing, which enables us to check and mitigate the overconfidence issue for a trained model.
We provide a simple yet effective framework, which can be flexibly applied to existing trustworthiness-related methods in a plug-and-play manner.
arXiv Detail & Related papers (2023-02-06T08:57:20Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
- Learning from Similarity-Confidence Data [94.94650350944377]
We investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data.
We propose an unbiased estimator of the classification risk that can be calculated from only Sconf data and show that the estimation error bound achieves the optimal convergence rate.
arXiv Detail & Related papers (2021-02-13T07:31:16Z)
- Uncertainty-sensitive Activity Recognition: a Reliability Benchmark and the CARING Models [37.60817779613977]
We present the first study of how well the confidence values of modern action recognition architectures reflect the probability of the correct outcome.
We introduce a new approach which learns to transform the model output into realistic confidence estimates through an additional calibration network.
arXiv Detail & Related papers (2021-01-02T15:41:21Z)