Conjunction Subspaces Test for Conformal and Selective Classification
- URL: http://arxiv.org/abs/2410.12297v1
- Date: Wed, 16 Oct 2024 06:56:53 GMT
- Title: Conjunction Subspaces Test for Conformal and Selective Classification
- Authors: Zengyou He, Zerun Li, Junjie Dong, Xinying Liu, Mudi Jiang, Lianyu Hu
- Abstract summary: We present a new classifier, which integrates significance testing results over different random subspaces to yield consensus p-values.
The proposed classifier can be easily deployed for the purpose of conformal prediction and selective classification with reject and refine options.
- Score: 1.8059823719166437
- License:
- Abstract: In this paper, we present a new classifier, which integrates significance testing results over different random subspaces to yield consensus p-values for quantifying the uncertainty of classification decisions. The null hypothesis is that the test sample has no association with the target class on a randomly chosen subspace, so the classification problem can be formulated as testing for a conjunction of hypotheses. The proposed classifier can be easily deployed for conformal prediction and for selective classification with reject and refine options by simply thresholding the consensus p-values. A theoretical analysis of the generalization error bound of the proposed classifier is provided, and empirical studies on real data sets demonstrate its effectiveness.
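To make the thresholding idea concrete, the following is a minimal sketch of a consensus-p-value classifier. It assumes a simple distance-to-class-mean conformity score on each random subspace and combines per-subspace p-values by averaging; the scoring rule, the combination step, and names such as `consensus_pvalues` are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def subspace_pvalue(x, X_class, dims):
    """P-value for the null 'x has no association with this class' on one
    random feature subspace. Uses a distance-to-class-mean conformity score
    with a conformal-style rank p-value (an illustrative choice of test)."""
    mu = X_class[:, dims].mean(axis=0)
    d_class = np.linalg.norm(X_class[:, dims] - mu, axis=1)
    d_new = np.linalg.norm(x[dims] - mu)
    return (np.sum(d_class >= d_new) + 1) / (len(d_class) + 1)

def consensus_pvalues(x, X_train, y_train, n_subspaces=50, subspace_dim=5):
    """Combine per-subspace p-values into one consensus p-value per class
    (here by simple averaging; the paper's conjunction-test combination
    rule may differ)."""
    n_features = X_train.shape[1]
    k = min(subspace_dim, n_features)
    pvals = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        ps = [subspace_pvalue(x, Xc, rng.choice(n_features, size=k, replace=False))
              for _ in range(n_subspaces)]
        pvals[c] = float(np.mean(ps))
    return pvals

def classify(x, X_train, y_train, alpha=0.1):
    """Conformal set and selective decision by thresholding consensus p-values."""
    pvals = consensus_pvalues(x, X_train, y_train)
    pred_set = [c for c, p in pvals.items() if p > alpha]  # conformal prediction set
    if not pred_set:
        return "reject", pred_set        # no class is plausible at level alpha
    if len(pred_set) > 1:
        return "refine", pred_set        # ambiguous: narrow down among these classes
    return pred_set[0], pred_set
```

In this sketch, thresholding each class's consensus p-value at level alpha yields the conformal prediction set, while an empty or multi-class set triggers the reject or refine option, mirroring the deployment described in the abstract.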
Related papers
- Submodular Information Selection for Hypothesis Testing with Misclassification Penalties [3.3444620077119436]
We study the problem of selecting an optimal subset of information sources for a hypothesis testing/classification task.
We propose a misclassification penalty framework, which enables nonuniform treatment of different misclassification errors.
We prove that this metric is submodular and establish near-optimal guarantees for greedy algorithms for both variants of the information set selection problem.
arXiv Detail & Related papers (2024-05-17T17:31:02Z)
- Distribution-free Conformal Prediction for Ordinal Classification [0.0]
Ordinal classification is common in real applications where the target variable has a natural ordering among the class labels.
New conformal prediction methods are developed for constructing contiguous and non-contiguous prediction sets.
arXiv Detail & Related papers (2024-04-25T13:49:59Z)
- Analysis of Diagnostics (Part I): Prevalence, Uncertainty Quantification, and Machine Learning [0.0]
This manuscript is the first in a two-part series that studies deeper connections between classification theory and prevalence.
We propose a numerical, homotopy algorithm that estimates $B^\star(q)$ by minimizing a prevalence-weighted empirical error.
We validate our methods in the context of synthetic data and a research-use-only SARS-CoV-2 enzyme-linked immunosorbent (ELISA) assay.
arXiv Detail & Related papers (2023-08-30T13:26:49Z)
- Class-Conditional Conformal Prediction with Many Classes [60.8189977620604]
We propose a method called clustered conformal prediction that clusters together classes having "similar" conformal scores.
We find that clustered conformal prediction typically outperforms existing methods in terms of class-conditional coverage and set-size metrics (see the rough sketch after this list).
arXiv Detail & Related papers (2023-06-15T17:59:02Z)
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- Classification with Rejection Based on Cost-sensitive Classification [83.50402803131412]
We propose a novel method of classification with rejection via an ensemble learning approach.
Experimental results demonstrate the usefulness of our proposed approach in clean, noisy, and positive-unlabeled classification.
arXiv Detail & Related papers (2020-10-22T14:05:05Z)
- Achieving Equalized Odds by Resampling Sensitive Attributes [13.114114427206678]
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness.
A differentiable discrepancy functional that quantifies violations of this criterion is used as a penalty driving the model parameters towards equalized odds.
We develop a formal hypothesis test to detect whether a prediction rule violates this property, the first such test in the literature.
arXiv Detail & Related papers (2020-06-08T00:18:34Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
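For the clustered conformal prediction entry above (Class-Conditional Conformal Prediction with Many Classes), the following is a rough sketch of the idea as summarized: classes with similar calibration-score behaviour share one conformal threshold. The clustering features, the finite-sample quantile correction, and the name `clustered_conformal_sets` are simplifying assumptions for illustration, not the authors' released implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_conformal_sets(cal_scores, cal_labels, test_scores,
                             n_clusters=3, alpha=0.1):
    """cal_scores:  (n_cal, n_classes) nonconformity scores on calibration data
    cal_labels:  (n_cal,) true labels of the calibration points
    test_scores: (n_test, n_classes) nonconformity scores on test points
    Returns one prediction set (list of class indices) per test point.
    Assumes n_clusters <= n_classes and every class appears in the calibration set."""
    n_classes = cal_scores.shape[1]

    # Summarize each class by quantiles of its true-class scores, then cluster
    # classes with similar score distributions together.
    qs = np.linspace(0.1, 0.9, 5)
    embed = np.stack([np.quantile(cal_scores[cal_labels == c, c], qs)
                      for c in range(n_classes)])
    cluster_of = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embed)

    # Calibrate one finite-sample-corrected quantile threshold per cluster.
    thresh = np.full(n_classes, np.inf)
    for k in range(n_clusters):
        member = np.isin(cal_labels, np.where(cluster_of == k)[0])
        s = cal_scores[member, cal_labels[member]]
        if len(s) > 0:
            level = min(1.0, np.ceil((len(s) + 1) * (1 - alpha)) / len(s))
            thresh[cluster_of == k] = np.quantile(s, level)

    # A class enters the prediction set if its score falls below its cluster's threshold.
    return [list(np.where(row <= thresh)[0]) for row in test_scores]
```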
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.