Statistical Verification of Linear Classifiers
- URL: http://arxiv.org/abs/2501.14430v1
- Date: Fri, 24 Jan 2025 11:56:45 GMT
- Title: Statistical Verification of Linear Classifiers
- Authors: Anton Zhiyanov, Alexander Shklyaev, Alexey Galatenko, Vladimir Galatenko, Alexander Tonevitsky
- Abstract summary: We propose a homogeneity test closely related to the concept of linear separability between two samples.
We focus on establishing upper bounds for the test's p-value when applied to two-dimensional samples.
- Score: 76.95660509846216
- License:
- Abstract: We propose a homogeneity test closely related to the concept of linear separability between two samples. Using the test, one can answer the question of whether a linear classifier is merely "random" or effectively captures differences between two classes. We focus on establishing upper bounds for the test's p-value when applied to two-dimensional samples. Specifically, for normally distributed samples we experimentally demonstrate that the upper bound is highly accurate. Using this bound, we evaluate classifiers designed to detect ER-positive breast cancer recurrence based on gene pair expression. Our findings confirm the significance of the IGFBP6 and ELOVL5 genes in this process.
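A minimal illustrative sketch of the question such a test answers, using a plain label-permutation approach rather than the paper's analytical p-value bound: fit a linear classifier to two labelled two-dimensional samples and check how often randomly relabelled data is separated equally well. The LinearSVC-based statistic, the toy Gaussian data, and all function names below are illustrative assumptions, not the authors' construction.

```python
# Hypothetical sketch: permutation test of whether a linear classifier separates
# two 2D samples better than chance (not the paper's analytical bound).
import numpy as np
from sklearn.svm import LinearSVC

def separability_stat(X, y):
    """Training accuracy of a linear classifier, used as the separability statistic."""
    clf = LinearSVC(max_iter=10_000)
    return clf.fit(X, y).score(X, y)

def permutation_p_value(X, y, n_perm=199, seed=None):
    """Monte Carlo p-value: how often do shuffled labels separate as well as the real ones?"""
    rng = np.random.default_rng(seed)
    observed = separability_stat(X, y)
    hits = sum(separability_stat(X, rng.permutation(y)) >= observed for _ in range(n_perm))
    return (1 + hits) / (n_perm + 1)

# Toy example: two 2D normal samples with shifted means.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(1.0, 1.0, size=(50, 2))])
y = np.repeat([0, 1], 50)
print(permutation_p_value(X, y, seed=1))  # small p-value: the separation is not "random"
```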
Related papers
- Analysis of Diagnostics (Part II): Prevalence, Linear Independence, and Unsupervised Learning [0.0]
Part I considered the context of supervised machine learning (ML).
Part II considers the extent to which these results can be extended to tasks in unsupervised learning.
arXiv Detail & Related papers (2024-08-28T13:39:57Z)
- Transductive conformal inference with adaptive scores [3.591224588041813]
We consider the transductive setting, where decisions are made on a test sample of $m$ new points.
We show that their joint distribution follows a Pólya urn model, and establish a concentration inequality for their empirical distribution function.
We demonstrate the usefulness of these theoretical results through uniform, in-probability guarantees for two machine learning tasks.
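For orientation only, a minimal sketch of split-conformal p-values for a batch of test points; it assumes generic nonconformity scores and does not reproduce the paper's adaptive scores or Pólya-urn analysis.

```python
# Hypothetical sketch: split-conformal p-values for m test points from calibration scores.
import numpy as np

def conformal_p_values(cal_scores, test_scores):
    """p_i = (1 + #{calibration scores >= test score i}) / (n_cal + 1)."""
    cal = np.asarray(cal_scores)
    test = np.asarray(test_scores)
    counts = (cal[None, :] >= test[:, None]).sum(axis=1)
    return (1 + counts) / (cal.size + 1)

rng = np.random.default_rng(0)
cal = rng.normal(size=200)            # nonconformity scores on held-out calibration data
test = rng.normal(loc=2.0, size=5)    # unusually large scores should yield small p-values
print(conformal_p_values(cal, test))
```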
arXiv Detail & Related papers (2023-10-27T12:48:30Z)
- A framework for paired-sample hypothesis testing for high-dimensional data [7.400168551191579]
We put forward the idea that scoring functions can be produced by the decision rules defined by the bisecting hyperplanes of the line segments connecting each pair of instances.
First, we estimate the bisecting hyperplanes for each pair of instances and an aggregated rule derived through the Hodges-Lehmann estimator.
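A geometric sketch of the bisecting-hyperplane idea under simplifying assumptions: each pair (a_i, b_i) defines the hyperplane that bisects the segment from a_i to b_i, and a point is scored against that hyperplane. The median aggregation and the toy paired data are illustrative stand-ins, not the paper's Hodges-Lehmann-based rule.

```python
# Hypothetical sketch: scoring points with the bisecting hyperplanes of paired instances.
import numpy as np

def bisecting_scores(A, B, Z):
    """scores[j, i] = w_i . (Z[j] - m_i), where w_i = B[i] - A[i] and m_i = (A[i] + B[i]) / 2."""
    W = B - A                          # hyperplane normals, one per pair
    M = (A + B) / 2.0                  # midpoints the hyperplanes pass through
    return Z @ W.T - (W * M).sum(axis=1)

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))              # "before" measurements
B = A + 0.5 + rng.normal(size=(30, 10))    # "after" measurements with a consistent shift
agg = np.median(bisecting_scores(A, B, B), axis=1)   # simple robust aggregate over pairs
print(float(agg.mean()))                   # positive on average when a shift is present
```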
arXiv Detail & Related papers (2023-09-28T09:17:11Z)
- Bootstrapped Edge Count Tests for Nonparametric Two-Sample Inference Under Heterogeneity [5.8010446129208155]
We develop a new nonparametric testing procedure that accurately detects differences between the two samples.
A comprehensive simulation study and an application to detecting user behaviors in online games demonstrate the excellent non-asymptotic performance of the proposed test.
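A simplified sketch of a graph-based edge-count test (in the spirit of Friedman-Rafsky-style tests, not the paper's bootstrapped procedure): build a 1-nearest-neighbour graph on the pooled sample and ask, via label permutation, whether there are unusually few edges connecting the two samples.

```python
# Hypothetical sketch: 1-NN edge-count two-sample test with a permutation p-value.
import numpy as np

def nn_indices(X):
    """Index of each point's Euclidean nearest neighbour, excluding itself."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    return D.argmin(axis=1)

def edge_count_test(X, labels, n_perm=199, seed=None):
    """Few cross-sample 1-NN edges is evidence against homogeneity of the two samples."""
    rng = np.random.default_rng(seed)
    nn = nn_indices(X)
    cross = lambda lab: int((lab != lab[nn]).sum())
    observed = cross(labels)
    hits = sum(cross(rng.permutation(labels)) <= observed for _ in range(n_perm))
    return (1 + hits) / (n_perm + 1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 3)), rng.normal(1.5, 1.0, (40, 3))])
labels = np.repeat([0, 1], 40)
print(edge_count_test(X, labels, seed=1))   # small p-value: the samples differ
```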
arXiv Detail & Related papers (2023-04-26T22:25:44Z)
- Learning disentangled representations for explainable chest X-ray classification using Dirichlet VAEs [68.73427163074015]
This study explores the use of the Dirichlet Variational Autoencoder (DirVAE) for learning disentangled latent representations of chest X-ray (CXR) images.
The predictive capacity of multi-modal latent representations learned by DirVAE models is investigated through implementation of an auxiliary multi-label classification task.
arXiv Detail & Related papers (2023-02-06T18:10:08Z)
- Visualizing Classifier Adjacency Relations: A Case Study in Speaker Verification and Voice Anti-Spoofing [72.4445825335561]
We propose a simple method to derive 2D representation from detection scores produced by an arbitrary set of binary classifiers.
Based upon rank correlations, our method facilitates a visual comparison of classifiers with arbitrary scores.
While the approach is fully versatile and can be applied to any detection task, we demonstrate the method using scores produced by automatic speaker verification and voice anti-spoofing systems.
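A minimal sketch of one way to build such a 2D map, assuming rank correlations between classifiers' score vectors are turned into distances and embedded with classical multidimensional scaling; the distance choice and the random scores below are illustrative, not the paper's exact procedure.

```python
# Hypothetical sketch: 2D embedding of classifiers from rank correlations of their scores.
import numpy as np
from scipy.stats import spearmanr

def classifier_map_2d(score_matrix):
    """score_matrix: (n_classifiers, n_trials) detection scores; returns (n_classifiers, 2)."""
    rho, _ = spearmanr(score_matrix, axis=1)    # rank correlations (assumes >= 3 classifiers)
    D2 = (1.0 - rho) ** 2                       # squared dissimilarities
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n         # centring matrix for classical MDS
    B = -0.5 * J @ D2 @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:2]            # two leading eigenpairs
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

rng = np.random.default_rng(0)
scores = rng.normal(size=(6, 500))              # 6 hypothetical classifiers, 500 shared trials
print(classifier_map_2d(scores).shape)          # (6, 2)
```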
arXiv Detail & Related papers (2021-06-11T13:03:33Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Predictive Value Generalization Bounds [27.434419027831044]
We study a bi-criterion framework for assessing scoring functions in the context of binary classification.
We study properties of scoring functions with respect to predictive values by deriving new distribution-free large deviation and uniform convergence bounds.
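For reference, a small sketch of the quantities such bounds are stated for, namely the empirical positive and negative predictive values of a thresholded scoring function; the threshold and synthetic scores are illustrative assumptions.

```python
# Hypothetical sketch: empirical PPV and NPV of the rule "predict 1 iff score >= threshold".
import numpy as np

def predictive_values(scores, labels, threshold):
    pred = scores >= threshold
    ppv = labels[pred].mean() if pred.any() else float("nan")            # P(y=1 | predicted 1)
    npv = (1 - labels[~pred]).mean() if (~pred).any() else float("nan")  # P(y=0 | predicted 0)
    return ppv, npv

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = labels + rng.normal(scale=1.0, size=1000)   # informative but noisy scores
print(predictive_values(scores, labels, threshold=0.5))
```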
arXiv Detail & Related papers (2020-07-09T21:23:28Z)
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
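A compact sketch of a binning-free, KS-style calibration error under a simplified reading of the idea: sort by predicted probability and take the largest gap between cumulative predicted probability and cumulative label frequency. This is an illustration, not the paper's spline-based recalibration code.

```python
# Hypothetical sketch: KS-style, binning-free calibration error.
import numpy as np

def ks_calibration_error(probs, labels):
    """Max gap between cumulative predicted probability and cumulative label frequency."""
    order = np.argsort(probs)
    p, y = probs[order], labels[order].astype(float)
    return np.max(np.abs(np.cumsum(p - y))) / p.size

rng = np.random.default_rng(0)
p = rng.uniform(size=5000)
y = rng.binomial(1, p)                                         # labels drawn from p: calibrated
print(ks_calibration_error(p, y))                              # close to 0
print(ks_calibration_error(np.clip(1.5 * p - 0.25, 0, 1), y))  # distorted probabilities: larger error
```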
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
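A toy Monte Carlo sketch of the phenomenon, not the paper's methodology: sample random linear classifiers, keep those that interpolate a small training set (zero training error), and inspect how their test errors are distributed. The Gaussian data and rejection-sampling loop below are illustrative assumptions.

```python
# Hypothetical sketch: test errors over random interpolating linear classifiers.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, d=2):
    """Two well-separated Gaussian classes in d dimensions, labelled -1 and +1."""
    X = np.vstack([rng.normal(-1.5, 1.0, (n, d)), rng.normal(1.5, 1.0, (n, d))])
    y = np.repeat([-1, 1], n)
    return X, y

Xtr, ytr = sample(10)      # small training set that many classifiers can interpolate
Xte, yte = sample(1000)    # large test set

test_errors = []
for _ in range(5000):
    w, b = rng.normal(size=Xtr.shape[1]), rng.normal()
    if np.all(np.sign(Xtr @ w + b) == ytr):                # keep only interpolating classifiers
        test_errors.append(np.mean(np.sign(Xte @ w + b) != yte))

test_errors = np.array(test_errors)
print(len(test_errors), test_errors.mean(), test_errors.max())  # errors cluster near a typical value
```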
arXiv Detail & Related papers (2020-06-22T21:12:31Z)