Testing the Consistency of Performance Scores Reported for Binary
Classification Problems
- URL: http://arxiv.org/abs/2310.12527v1
- Date: Thu, 19 Oct 2023 07:04:29 GMT
- Title: Testing the Consistency of Performance Scores Reported for Binary
Classification Problems
- Authors: Attila Fazekas and György Kovács
- Abstract summary: We introduce numerical techniques to assess the consistency of reported performance scores and the assumed experimental setup.
We demonstrate how the proposed techniques can effectively detect inconsistencies, thereby safeguarding the integrity of research fields.
To benefit the scientific community, we have made the consistency tests available in an open-source Python package.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Binary classification is a fundamental task in machine learning, with
applications spanning various scientific domains. Whether scientists are
conducting fundamental research or refining practical applications, they
typically assess and rank classification techniques based on performance
metrics such as accuracy, sensitivity, and specificity. However, reported
performance scores may not always serve as a reliable basis for such
rankings. This can be attributed to undisclosed or unconventional practices
related to cross-validation, typographical errors, and other factors. In a
given experimental setup, with a specific number of positive and negative test
items, most performance scores can assume only specific, interrelated values. In
this paper, we introduce numerical techniques to assess the consistency of
reported performance scores and the assumed experimental setup. Importantly,
the proposed approach does not rely on statistical inference but uses numerical
methods to identify inconsistencies with certainty. Through three different
applications related to medicine, we demonstrate how the proposed techniques
can effectively detect inconsistencies, thereby safeguarding the integrity of
research fields. To benefit the scientific community, we have made the
consistency tests available in an open-source Python package.
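As a rough, self-contained illustration of the underlying idea (a simplified sketch, not the authors' actual algorithm or the API of their package), the Python snippet below checks whether reported accuracy, sensitivity, and specificity are simultaneously achievable on a single test set with p positive and n negative items: sensitivity constrains the feasible true-positive counts, specificity constrains the feasible true-negative counts, and the reported accuracy must then be reproducible by one of the remaining integer confusion matrices. The function names and the rounding tolerance eps are illustrative assumptions.

```python
from itertools import product

def feasible_counts(score, total, eps):
    """Integer counts c in [0, total] whose ratio c/total matches the
    reported score up to the rounding tolerance eps."""
    return [c for c in range(total + 1) if abs(c / total - score) <= eps]

def consistent(acc, sens, spec, p, n, eps=1e-4):
    """Return True if some (tp, tn) pair reproduces all three scores.

    A reported triple is flagged as inconsistent (False) only when no
    integer confusion matrix on p positives and n negatives can yield
    the scores within the tolerance -- a certain, non-statistical check.
    """
    tps = feasible_counts(sens, p, eps)   # candidate true-positive counts
    tns = feasible_counts(spec, n, eps)   # candidate true-negative counts
    return any(abs((tp + tn) / (p + n) - acc) <= eps
               for tp, tn in product(tps, tns))

# Example: with 100 positives and 200 negatives, sens=0.85 and spec=0.90
# force tp=85 and tn=180, so accuracy must be 265/300 ~ 0.8833; a reported
# accuracy of 0.95 cannot come from any confusion matrix on this test set.
print(consistent(0.8833, 0.85, 0.90, p=100, n=200, eps=1e-4))  # True
print(consistent(0.95,   0.85, 0.90, p=100, n=200, eps=1e-4))  # False
```

Because the check enumerates integer counts rather than performing statistical inference, a negative result certifies an inconsistency between the reported scores and the assumed test-set composition.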
Related papers
- ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation [2.1517210693540005]
Uncertainty estimation is an essential and heavily-studied component for semantic segmentation methods.
Can data-related and model-related uncertainty really be separated in practice?
Which components of an uncertainty method are essential for real-world performance?
arXiv Detail & Related papers (2024-01-16T17:02:21Z)
- mlscorecheck: Testing the consistency of reported performance scores and experiments in machine learning [0.0]
We have developed numerical techniques capable of identifying inconsistencies between reported performance scores and various experimental setups in machine learning problems.
These consistency tests are integrated into the open-source package mlscorecheck.
arXiv Detail & Related papers (2023-11-13T18:31:48Z)
- Too Good To Be True: performance overestimation in (re)current practices for Human Activity Recognition [49.1574468325115]
Sliding windows for data segmentation followed by standard random k-fold cross-validation produce biased results.
It is important to raise awareness in the scientific community about this problem, whose negative effects are being overlooked.
Several experiments with different types of datasets and classification models allow us to exhibit the problem and show that it persists regardless of the method or dataset.
arXiv Detail & Related papers (2023-10-18T13:24:05Z)
- A Double Machine Learning Approach to Combining Experimental and Observational Data [59.29868677652324]
We propose a double machine learning approach to combine experimental and observational studies.
Our framework tests for violations of external validity and ignorability under milder assumptions.
arXiv Detail & Related papers (2023-07-04T02:53:11Z)
- Systematic Evaluation of Predictive Fairness [60.0947291284978]
Mitigating bias in training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- Evaluating Causal Inference Methods [0.4588028371034407]
We introduce a deep generative model-based framework, Credence, to validate causal inference methods.
arXiv Detail & Related papers (2022-02-09T00:21:22Z)
- Learning to Rank Anomalies: Scalar Performance Criteria and Maximization of Two-Sample Rank Statistics [0.0]
We propose a data-driven scoring function defined on the feature space which reflects the degree of abnormality of the observations.
This scoring function is learnt through a well-designed binary classification problem.
We illustrate our methodology with preliminary encouraging numerical experiments.
arXiv Detail & Related papers (2021-09-20T14:45:56Z)
- Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning [66.59455427102152]
We introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks.
Each baseline is a self-contained experiment pipeline with easily reusable and extendable components.
We provide model checkpoints, experiment outputs as Python notebooks, and leaderboards for comparing results.
arXiv Detail & Related papers (2021-06-07T23:57:32Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)