Classification at the Accuracy Limit -- Facing the Problem of Data
Ambiguity
- URL: http://arxiv.org/abs/2206.01922v1
- Date: Sat, 4 Jun 2022 07:00:32 GMT
- Title: Classification at the Accuracy Limit -- Facing the Problem of Data
Ambiguity
- Authors: Claus Metzner, Achim Schilling, Maximilian Traxdorf, Konstantin
Tziridis, Holger Schulze, Patrick Krauss
- Abstract summary: We show the theoretical limit for classification accuracy that arises from the overlap of data categories.
We compare emerging data embeddings produced by supervised and unsupervised training, using MNIST and human EEG recordings during sleep.
This suggests that human-defined categories, such as hand-written digits or sleep stages, can indeed be considered as 'natural kinds'.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data classification, the process of analyzing data and organizing it into
categories, is a fundamental computing problem of natural and artificial
information processing systems. Ideally, the performance of classifier models
would be evaluated using unambiguous data sets, where the 'correct' assignment
of category labels to the input data vectors is unequivocal. In real-world
problems, however, a significant fraction of actually occurring data vectors
will be located in a boundary zone between or outside of all categories, so
that perfect classification cannot even in principle be achieved. We derive the
theoretical limit for classification accuracy that arises from the overlap of
data categories. By using a surrogate data generation model with adjustable
statistical properties, we show that sufficiently powerful classifiers based on
completely different principles, such as perceptrons and Bayesian models, all
perform at this universal accuracy limit. Remarkably, the accuracy limit is not
affected by applying non-linear transformations to the data, even if these
transformations are non-reversible and drastically reduce the information
content of the input data. We compare emerging data embeddings produced by
supervised and unsupervised training, using MNIST and human EEG recordings
during sleep. We find that categories are not only well separated in the final
layers of classifiers trained with back-propagation, but to a smaller degree
also after unsupervised dimensionality reduction. This suggests that
human-defined categories, such as hand-written digits or sleep stages, can
indeed be considered as 'natural kinds'.
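The abstract's central claim, that sufficiently powerful classifiers of very different design all saturate at the same overlap-determined accuracy limit, can be probed with a small surrogate experiment. The sketch below is not the authors' code: the two isotropic Gaussian categories, the separation parameter delta, and the scikit-learn classifiers are illustrative assumptions. For this setup the Bayes-optimal accuracy has the closed form Phi(delta/2), and both a perceptron-like linear model and a Bayesian model should land near it.

```python
# Minimal sketch (assumed setup, not the paper's surrogate model): two
# overlapping Gaussian categories with an analytically known accuracy limit.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

dim, n_per_class, delta = 10, 20_000, 1.5     # delta controls category overlap
mu0 = np.zeros(dim)
mu1 = np.zeros(dim)
mu1[0] = delta                                # means separated by delta along one axis

X = np.vstack([rng.normal(mu0, 1.0, (n_per_class, dim)),
               rng.normal(mu1, 1.0, (n_per_class, dim))])
y = np.repeat([0, 1], n_per_class)

# Analytic accuracy limit for two equal-prior, equal-covariance Gaussians:
# P(correct) = Phi(delta / 2), independent of the classifier.
bayes_limit = norm.cdf(delta / 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for name, clf in [("linear model (perceptron-like)", LogisticRegression(max_iter=1000)),
                  ("Bayesian model (Gaussian naive Bayes)", GaussianNB())]:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.3f}")

print(f"theoretical accuracy limit = {bayes_limit:.3f}")
```

Applying a non-invertible transformation to X before training (for example, coarse quantization of each feature) is a simple way to probe the paper's further claim that such transformations leave the accuracy limit unchanged.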
Related papers
- Directly Handling Missing Data in Linear Discriminant Analysis for Enhancing Classification Accuracy and Interpretability [1.4840867281815378]
We introduce a novel and robust classification method, termed weighted missing Linear Discriminant Analysis (WLDA).
WLDA extends Linear Discriminant Analysis (LDA) to handle datasets with missing values without the need for imputation.
We conduct an in-depth theoretical analysis to establish the properties of WLDA and thoroughly evaluate its explainability.
arXiv Detail & Related papers (2024-06-30T14:21:32Z)
- Learning from Multiple Unlabeled Datasets with Partial Risk Regularization [80.54710259664698]
In this paper, we aim to learn an accurate classifier without any class labels.
We first derive an unbiased estimator of the classification risk that can be computed from the given unlabeled sets.
We then find that the classifier obtained in this way tends to overfit, as its empirical risk goes negative during training.
Experiments demonstrate that our method effectively mitigates overfitting and outperforms state-of-the-art methods for learning from multiple unlabeled sets.
arXiv Detail & Related papers (2022-07-04T16:22:44Z)
- Classification of datasets with imputed missing values: does imputation quality matter? [2.7646249774183]
Classifying samples in incomplete datasets is non-trivial.
We demonstrate how the commonly used measures for assessing imputation quality are flawed.
We propose a new class of discrepancy scores which focus on how well the method recreates the overall distribution of the data.
arXiv Detail & Related papers (2022-06-16T22:58:03Z)
- Self-Trained One-class Classification for Unsupervised Anomaly Detection [56.35424872736276]
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms a state-of-the-art one-class classification method by 6.3 AUC and 12.5 average precision.
arXiv Detail & Related papers (2021-06-11T01:36:08Z)
- Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
- Classification and Uncertainty Quantification of Corrupted Data using Semi-Supervised Autoencoders [11.300365160909879]
We present a probabilistic approach to classify strongly corrupted data and quantify uncertainty.
A semi-supervised autoencoder trained on uncorrupted data is the underlying architecture.
We show that the model uncertainty strongly depends on whether the classification is correct or wrong.
arXiv Detail & Related papers (2021-05-27T18:47:55Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the outcome of the transformation is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- FIND: Human-in-the-Loop Debugging Deep Text Classifiers [55.135620983922564]
We propose FIND -- a framework which enables humans to debug deep learning text classifiers by disabling irrelevant hidden features.
Experiments show that by using FIND, humans can improve CNN text classifiers that were trained on different types of imperfect datasets.
arXiv Detail & Related papers (2020-10-10T12:52:53Z)
- Dynamic Decision Boundary for One-class Classifiers applied to non-uniformly Sampled Data [0.9569316316728905]
A typical issue in pattern recognition is non-uniformly sampled data.
In this paper, we propose a one-class classifier based on the minimum spanning tree with a dynamic decision boundary.
arXiv Detail & Related papers (2020-04-05T18:29:36Z)
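The entry above describes a one-class classifier built on the minimum spanning tree (MST) of the training data, with a decision boundary that adapts to non-uniform sampling. The summary does not spell out the paper's exact boundary rule, so the sketch below substitutes an assumed one: each training point gets a local acceptance radius derived from its incident MST edges. The function names, the slack factor, and the per-node rule are illustrative, not taken from the paper.

```python
# Hedged sketch of an MST-based one-class classifier with a locally adaptive
# ("dynamic") acceptance threshold. The per-node threshold is an assumption,
# not the cited paper's rule.
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def fit_mst_occ(X_train, slack=1.5):
    """Build an MST over the training data and record, for every training
    point, the longest MST edge incident to it (a local length scale)."""
    d = distance_matrix(X_train, X_train)
    mst = minimum_spanning_tree(d).toarray()
    mst = np.maximum(mst, mst.T)            # symmetrize edge weights
    local_scale = mst.max(axis=1)           # longest incident edge per node
    return {"X": X_train, "scale": slack * local_scale}

def predict_mst_occ(model, X_test):
    """Accept a test point if it lies within the local acceptance radius of
    its nearest training point; otherwise flag it as outside the class."""
    d = distance_matrix(X_test, model["X"])
    nearest = d.argmin(axis=1)
    return d[np.arange(len(X_test)), nearest] <= model["scale"][nearest]

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 2))         # stand-in for non-uniformly sampled data
model = fit_mst_occ(X_train)
print(predict_mst_occ(model, np.array([[0.1, -0.2], [6.0, 6.0]])))  # typically [ True False ]
```

The per-node radius is what makes the boundary adaptive in this sketch: densely sampled regions get tight thresholds, while sparsely sampled regions get looser ones.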
This list is automatically generated from the titles and abstracts of the papers on this site.