Classification with Rejection Based on Cost-sensitive Classification
- URL: http://arxiv.org/abs/2010.11748v5
- Date: Wed, 29 Sep 2021 13:29:45 GMT
- Title: Classification with Rejection Based on Cost-sensitive Classification
- Authors: Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama
- Abstract summary: We propose a novel method of classification with rejection by learning an ensemble of cost-sensitive classifiers.
Experimental results demonstrate the usefulness of our proposed approach in clean, noisy, and positive-unlabeled classification.
- Score: 83.50402803131412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of classification with rejection is to avoid risky misclassification
in error-critical applications such as medical diagnosis and product
inspection. In this paper, based on the relationship between classification
with rejection and cost-sensitive classification, we propose a novel method of
classification with rejection by learning an ensemble of cost-sensitive
classifiers, which satisfies all the following properties: (i) it can avoid
estimating class-posterior probabilities, resulting in improved classification
accuracy, (ii) it allows a flexible choice of losses including non-convex ones,
(iii) it does not require complicated modifications when using different
losses, (iv) it is applicable to both binary and multiclass cases, and (v) it
is theoretically justifiable for any classification-calibrated loss.
Experimental results demonstrate the usefulness of our proposed approach in
clean-labeled, noisy-labeled, and positive-unlabeled classification.
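As a rough illustration of the connection the abstract exploits (a simplified binary sketch, not the paper's ensemble algorithm): with rejection cost c, the Bayes-optimal rule predicts a class when the positive-class posterior eta(x) lies outside [c, 1-c] and rejects inside it, while a cost-sensitive classifier with threshold alpha recovers sign(eta(x) - alpha). Training two cost-sensitive classifiers at alpha = c and alpha = 1-c and rejecting where they disagree therefore mimics the optimal rejector. The weighted logistic regression and synthetic data below are purely illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_weighted_logreg(X, y, w, lr=0.5, steps=3000):
    """Weighted logistic regression by gradient descent; labels y in {-1, +1}.

    Weighting positives by (1 - alpha) and negatives by alpha shifts the
    learned decision boundary to the posterior level eta(x) = alpha.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        margins = y * (Xb @ theta)
        coef = w * y / (1.0 + np.exp(margins))  # w * y * sigmoid(-margin)
        theta += lr * (Xb * coef[:, None]).mean(axis=0)
    return theta

def predict(theta, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ theta)

# Synthetic 1-D data with known posterior eta(x) = sigmoid(x).
X = rng.uniform(-4, 4, size=(4000, 1))
eta = 1.0 / (1.0 + np.exp(-X[:, 0]))
y = np.where(rng.uniform(size=len(X)) < eta, 1.0, -1.0)

c = 0.2  # rejection cost
weights = lambda alpha: np.where(y > 0, 1 - alpha, alpha)
theta_lo = fit_weighted_logreg(X, y, weights(c))      # boundary at eta = c
theta_hi = fit_weighted_logreg(X, y, weights(1 - c))  # boundary at eta = 1-c

x_test = np.array([[-3.0], [0.0], [3.0]])
p_lo, p_hi = predict(theta_lo, x_test), predict(theta_hi, x_test)
decision = np.where(p_lo == p_hi, p_lo, 0)  # 0 encodes "reject"
print(decision)  # only the ambiguous point near x = 0 should be rejected
```

The paper's contribution goes well beyond this sketch: it covers general classification-calibrated (including non-convex) losses and the multiclass case, without the posterior estimation this plug-in illustration relies on.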
Related papers
- Conjunction Subspaces Test for Conformal and Selective Classification [1.8059823719166437]
We present a new classifier, which integrates significance testing results over different random subspaces to yield consensus p-values.
The proposed classifier can be easily deployed for the purpose of conformal prediction and selective classification with reject and refine options.
arXiv Detail & Related papers (2024-10-16T06:56:53Z)
- A Universal Unbiased Method for Classification from Aggregate Observations [115.20235020903992]
This paper presents a novel universal method of CFAO, which holds an unbiased estimator of the classification risk for arbitrary losses.
Our proposed method not only guarantees the risk consistency due to the unbiased risk estimator but also can be compatible with arbitrary losses.
arXiv Detail & Related papers (2023-06-20T07:22:01Z)
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
- The Impact of Using Regression Models to Build Defect Classifiers [13.840006058766766]
It is common practice to discretize continuous defect counts into defective and non-defective classes.
We compare the performance and interpretation of defect classifiers built using both approaches.
arXiv Detail & Related papers (2022-02-12T22:12:55Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
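A generic Sinkhorn-style sketch of what alternating normalization can look like (CAN's exact procedure, which anchors a set of confidently classified reference examples, differs; the balanced class prior below is an assumption for illustration): alternately rescale the columns of a prediction matrix toward a target class-mass vector and the rows back to valid probability distributions.

```python
import numpy as np

def alternating_normalization(P, col_mass, iters=50):
    """P: (n, k) predicted class probabilities; col_mass: (k,) target
    total mass per class. Alternates column and row rescaling."""
    P = P.copy()
    for _ in range(iters):
        P *= col_mass / P.sum(axis=0)          # columns: match class mass
        P /= P.sum(axis=1, keepdims=True)      # rows: renormalize to sum to 1
    return P

preds = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.6, 0.4]])                 # skewed toward class 0
col_mass = np.array([0.5, 0.5]) * len(preds)   # assumed balanced prior
adjusted = alternating_normalization(preds, col_mass)
print(adjusted.round(3))
```

Under the balanced-prior assumption, the ambiguous third example is pulled toward class 1, illustrating how re-adjusting the predicted distribution can flip borderline predictions.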
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- Constrained Classification and Policy Learning [0.0]
We study consistency of surrogate loss procedures under a constrained set of classifiers.
We show that hinge losses are the only surrogate losses that preserve consistency in second-best scenarios.
arXiv Detail & Related papers (2021-06-24T10:43:00Z)
- On Focal Loss for Class-Posterior Probability Estimation: A Theoretical Perspective [83.19406301934245]
We first prove that the focal loss is classification-calibrated, i.e., its minimizer surely yields the Bayes-optimal classifier.
We then prove that the focal loss is not strictly proper, i.e., the confidence score of the classifier does not match the true class-posterior probability.
Our proposed transformation significantly improves the accuracy of class-posterior probability estimation.
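The "not strictly proper" claim can be checked numerically (an illustrative sketch using the standard binary focal loss FL(p) = -(1-p)^gamma * log(p), not the paper's proofs or its transformation): under a true posterior eta, the pointwise focal risk is minimized at some p* on the correct side of 1/2 (calibration) but generally away from eta itself (impropriety).

```python
import numpy as np

def focal(p, gamma=2.0):
    """Standard binary focal loss for predicted probability p of the true class."""
    return -((1.0 - p) ** gamma) * np.log(p)

def pointwise_minimizer(eta, gamma=2.0):
    """Grid-search the minimizer of the pointwise focal risk
    R(p) = eta * FL(p) + (1 - eta) * FL(1 - p)."""
    p = np.linspace(1e-4, 1.0 - 1e-4, 100_000)
    risk = eta * focal(p, gamma) + (1.0 - eta) * focal(1.0 - p, gamma)
    return p[np.argmin(risk)]

eta = 0.7
p_star = pointwise_minimizer(eta)
print(f"eta = {eta}, focal-risk minimizer p* = {p_star:.3f}")
# p* stays above 1/2 (calibrated) yet lands noticeably below eta,
# so the raw confidence score underestimates the true posterior.
```

This is exactly the gap the paper's transformation is designed to close when focal-loss confidence scores are used as posterior estimates.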
arXiv Detail & Related papers (2020-11-18T09:36:52Z)
- Learning Gradient Boosted Multi-label Classification Rules [4.842945656927122]
We propose an algorithm for learning multi-label classification rules that is able to minimize decomposable as well as non-decomposable loss functions.
We analyze the abilities and limitations of our approach on synthetic data and evaluate its predictive performance on multi-label benchmarks.
arXiv Detail & Related papers (2020-06-23T21:39:23Z)
- Angle-Based Cost-Sensitive Multicategory Classification [34.174072286426885]
We propose a novel angle-based cost-sensitive classification framework for multicategory classification without the sum-to-zero constraint.
To show the usefulness of the framework, two cost-sensitive multicategory boosting algorithms are derived as concrete instances.
arXiv Detail & Related papers (2020-03-08T00:42:15Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.