Classification of datasets with imputed missing values: does imputation
quality matter?
- URL: http://arxiv.org/abs/2206.08478v1
- Date: Thu, 16 Jun 2022 22:58:03 GMT
- Authors: Tolou Shadbahr, Michael Roberts, Jan Stanczuk, Julian Gilbey,
Philip Teare, Sören Dittmer, Matthew Thorpe, Ramon Vinas Torne, Evis
Sala, Pietro Lio, Mishal Patel, AIX-COVNET Collaboration, James H.F. Rudd,
Tuomas Mirtti, Antti Rannikko, John A.D. Aston, Jing Tang, Carola-Bibiane
Schönlieb
- Abstract summary: Classifying samples in incomplete datasets is non-trivial.
We demonstrate how the commonly used measures for assessing quality are flawed.
We propose a new class of discrepancy scores which focus on how well the method recreates the overall distribution of the data.
- Score: 2.7646249774183
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classifying samples in incomplete datasets is a common aim for machine
learning practitioners, but is non-trivial. Missing data is found in most
real-world datasets and these missing values are typically imputed using
established methods, followed by classification of the now-complete, imputed
samples. The machine learning researcher's focus is then to optimise the
downstream classification performance. In this study, we highlight that it is
imperative to consider the quality of the imputation. We demonstrate how the
commonly used measures for assessing quality are flawed and propose a new class
of discrepancy scores which focus on how well the method recreates the overall
distribution of the data. To conclude, we highlight the compromised
interpretability of classifier models trained using poorly imputed data.
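The abstract does not specify the proposed discrepancy scores, but the core argument can be illustrated with a minimal sketch: a pointwise error such as RMSE rewards imputations that collapse onto the feature mean, while a distributional score (here a 1-D Wasserstein-1 distance, used purely as a stand-in for the paper's scores) rewards imputations that recreate the overall spread of the data. All names and values below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth values that were masked out of a dataset,
# plus two candidate imputations: the feature mean (which collapses the
# distribution) and an independent draw from the true distribution
# (which preserves it).
true_vals = rng.normal(loc=0.0, scale=1.0, size=2000)
mean_imputed = np.full_like(true_vals, true_vals.mean())
sampled_imputed = rng.normal(loc=0.0, scale=1.0, size=2000)

def rmse(a, b):
    """Pointwise root-mean-square error."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def w1(a, b):
    """1-D Wasserstein-1 distance between equal-size samples:
    mean absolute difference of the sorted values."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

# Pointwise RMSE prefers the mean imputation (~1.0 vs ~1.41)...
rmse_mean = rmse(true_vals, mean_imputed)
rmse_sampled = rmse(true_vals, sampled_imputed)

# ...but the distributional score prefers the imputation that
# recreates the data distribution (~0.80 vs ~0.03).
w_mean = w1(true_vals, mean_imputed)
w_sampled = w1(true_vals, sampled_imputed)
```

The two measures rank the imputations in opposite orders, which is the sense in which a purely pointwise quality measure can be misleading.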
Related papers
- Enhancing Image Classification in Small and Unbalanced Datasets through Synthetic Data Augmentation [0.0]
This paper introduces a novel synthetic augmentation strategy using class-specific Variational Autoencoders (VAEs) and latent space to improve discrimination capabilities.
By generating realistic, varied synthetic data that fills feature space gaps, we address issues of data scarcity and class imbalance.
The proposed strategy was tested in a small dataset of 321 images created to train and validate an automatic method for assessing the quality of cleanliness of esophagogastroduodenoscopy images.
arXiv Detail & Related papers (2024-09-16T13:47:52Z)
- Fair Classification with Partial Feedback: An Exploration-Based Data Collection Approach [15.008626822593]
In many predictive contexts, true outcomes are only observed for samples that were positively classified in the past.
We present an approach that trains a classifier using available data and comes with a family of exploration strategies.
We show that this approach consistently boosts the quality of collected outcome data and improves the fraction of true positives for all groups.
arXiv Detail & Related papers (2024-02-17T17:09:19Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Mutual Information Learned Classifiers: an Information-theoretic Viewpoint of Training Deep Learning Classification Systems [9.660129425150926]
Cross-entropy loss can easily lead to models that exhibit severe overfitting.
In this paper, we prove that the existing cross entropy loss minimization for training DNN classifiers essentially learns the conditional entropy of the underlying data distribution.
We propose a mutual information learning framework where we train DNN classifiers via learning the mutual information between the label and input.
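The information-theoretic identities behind this summary can be checked on a toy discrete distribution: a classifier that predicts the true conditional p(y|x) achieves expected log-loss equal to the conditional entropy H(Y|X), and the mutual information is I(X;Y) = H(Y) - H(Y|X). This is a generic illustration of those quantities, not the paper's training framework.

```python
import numpy as np

# Toy joint distribution p(x, y) over 2 inputs and 2 labels.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)            # marginal p(x) = [0.5, 0.5]
p_y = p_xy.sum(axis=0)            # marginal p(y) = [0.5, 0.5]
p_y_given_x = p_xy / p_x[:, None] # conditional p(y|x), rows [0.8, 0.2]

def entropy(p):
    """Shannon entropy in bits."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

h_y = entropy(p_y)  # H(Y) = 1 bit

# H(Y|X): also the expected log-loss of the optimal (Bayes) classifier,
# i.e. the floor that cross-entropy minimisation converges to.
h_y_given_x = float(-(p_xy * np.log2(p_y_given_x)).sum())

# Mutual information between input and label.
mi = h_y - h_y_given_x  # ~0.278 bits
```

Minimising cross-entropy drives the model toward H(Y|X); explicitly maximising I(X;Y) additionally accounts for the label marginal H(Y), which is the distinction the abstract draws.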
arXiv Detail & Related papers (2022-10-03T15:09:19Z)
- Classification at the Accuracy Limit -- Facing the Problem of Data Ambiguity [0.0]
We show the theoretical limit for classification accuracy that arises from the overlap of data categories.
We compare emerging data embeddings produced by supervised and unsupervised training, using MNIST and human EEG recordings during sleep.
This suggests that human-defined categories, such as hand-written digits or sleep stages, can indeed be considered 'natural kinds'.
arXiv Detail & Related papers (2022-06-04T07:00:32Z)
- Self-Trained One-class Classification for Unsupervised Anomaly Detection [56.35424872736276]
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms the state-of-the-art one-class classification method by 6.3 AUC and 12.5 average precision.
arXiv Detail & Related papers (2021-06-11T01:36:08Z)
- Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
- Semi-supervised Long-tailed Recognition using Alternate Sampling [95.93760490301395]
The main challenges in long-tailed recognition come from the imbalanced data distribution and sample scarcity in its tail classes.
We propose a new recognition setting, namely semi-supervised long-tailed recognition.
We demonstrate significant accuracy improvements over other competitive methods on two datasets.
arXiv Detail & Related papers (2021-05-01T00:43:38Z)
- Out-distribution aware Self-training in an Open World Setting [62.19882458285749]
We leverage unlabeled data in an open world setting to further improve prediction performance.
We introduce out-distribution aware self-training, which includes a careful sample selection strategy.
Our classifiers are by design out-distribution aware and can thus distinguish task-related inputs from unrelated ones.
arXiv Detail & Related papers (2020-12-21T12:25:04Z)
- Imputation of Missing Data with Class Imbalance using Conditional Generative Adversarial Networks [24.075691766743702]
We propose a new method for imputing missing data based on its class-specific characteristics.
Our Conditional Generative Adversarial Imputation Network (CGAIN) imputes the missing data using class-specific distributions.
We tested our approach on benchmark datasets and achieved superior performance compared with the state-of-the-art and popular imputation approaches.
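CGAIN learns class-specific distributions with a conditional GAN; as a much simpler illustrative stand-in for "condition the imputation on the class label", the sketch below fills each missing entry with the mean of its feature computed within the sample's own class. The data, labels, and variable names are all hypothetical.

```python
import numpy as np

# Toy feature matrix with missing entries (NaN) and binary class labels.
X = np.array([[1.0, np.nan],
              [2.0, 3.0],
              [np.nan, 8.0],
              [5.0, 9.0]])
y = np.array([0, 0, 1, 1])

X_imp = X.copy()
for cls in np.unique(y):
    rows = y == cls
    # Per-class feature means, ignoring the missing entries.
    col_means = np.nanmean(X[rows], axis=0)
    # Fill only the NaNs that belong to this class.
    r, c = np.where(np.isnan(X_imp) & rows[:, None])
    X_imp[r, c] = col_means[c]
```

A global mean imputation would fill X[0, 1] with the mean of all observed values in column 1; conditioning on the class instead uses only class-0 rows, which is the (greatly simplified) intuition behind class-specific imputation.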
arXiv Detail & Related papers (2020-12-01T02:26:54Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.