The Dutch Draw: Constructing a Universal Baseline for Binary Prediction Models
- URL: http://arxiv.org/abs/2203.13084v1
- Date: Thu, 24 Mar 2022 14:20:27 GMT
- Title: The Dutch Draw: Constructing a Universal Baseline for Binary Prediction Models
- Authors: Etienne van de Bijl, Jan Klein, Joris Pries, Sandjai Bhulai, Mark Hoogendoorn, Rob van der Mei
- Abstract summary: A proper baseline is needed to evaluate the 'goodness' of a performance score. This paper presents a universal baseline method for all binary classification models, named the Dutch Draw (DD).
- Score: 2.8816551600116527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novel prediction methods should always be compared to a baseline to know how
well they perform. Without this frame of reference, the performance score of a
model is basically meaningless. What does it mean when a model achieves an
$F_1$ of 0.8 on a test set? A proper baseline is needed to evaluate the
'goodness' of a performance score. Comparing with the latest state-of-the-art
model is usually insightful. However, being state-of-the-art can change rapidly
when newer models are developed. Instead of an advanced model, a simple dummy
classifier could be used. However, the latter can be beaten too easily,
making the comparison less valuable. This paper presents a universal baseline
method for all binary classification models, named the Dutch Draw (DD). This
approach weighs simple classifiers and determines the best classifier to use as
a baseline. We theoretically derive the DD baseline for many commonly used
evaluation measures and show that in most situations it reduces to (almost)
always predicting either zero or one. In summary, the DD baseline is: (1)
general, as it is applicable to all binary classification problems; (2) simple,
as it is quickly determined without training or parameter-tuning; (3)
informative, as insightful conclusions can be drawn from the results. The DD
baseline serves two purposes. First, it enables comparisons across research
papers through a robust and universal reference point. Second, it provides a sanity
check during the development process of a prediction model. It is a major
warning sign when a model is outperformed by the DD baseline.
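The selection mechanic is simple enough to sketch. Below is a minimal Python illustration, not the authors' implementation: it searches over input-independent dummy classifiers that predict exactly k of n samples as positive and picks the k with the best expected score. The paper derives this optimum in closed form; the sketch estimates it by Monte Carlo instead. The function name dutch_draw_baseline and the toy labels are made up for this sketch.

```python
import numpy as np
from sklearn.metrics import f1_score

def _f1(y_true, y_pred):
    return f1_score(y_true, y_pred, zero_division=0)

def dutch_draw_baseline(y_true, metric=_f1, n_draws=500, seed=0):
    """Estimate the best input-independent dummy classifier.

    A Dutch Draw classifier ignores all features and predicts exactly k
    of the n samples as positive, chosen uniformly at random. Here the
    expected score for each k is estimated by Monte Carlo; the paper
    derives the optimal k analytically for many common measures.
    """
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    n = len(y_true)
    expected = np.empty(n + 1)
    for k in range(n + 1):
        base = np.zeros(n, dtype=int)
        base[:k] = 1  # k positives, n - k negatives
        expected[k] = np.mean(
            [metric(y_true, rng.permutation(base)) for _ in range(n_draws)]
        )
    best_k = int(np.argmax(expected))
    return best_k / n, expected[best_k]

# Imbalanced toy labels: for F1 the optimum is expected (up to Monte
# Carlo noise) to be predicting all ones, matching the paper's finding
# that the DD baseline (almost) always predicts a single class.
y = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0])
theta, score = dutch_draw_baseline(y)
print(theta, score)  # typically (1.0, ~0.33)
```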
Related papers
- Reviving Undersampling for Long-Tailed Learning [16.054442161144603]
We aim to enhance the accuracy of the worst-performing categories and use the harmonic and geometric means to assess the model's performance (illustrated in the sketch below).
We devise a straightforward model ensemble strategy, which does not add any overhead and achieves improved harmonic and geometric means.
We validate the effectiveness of our approach on widely utilized benchmark datasets for long-tailed learning.
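Why harmonic and geometric means? Both are dragged down by the worst class, unlike the arithmetic mean, so they reward balanced accuracy across head and tail classes. A tiny illustration (the helper name balanced_means is made up for this sketch):

```python
import numpy as np

def balanced_means(per_class_acc, eps=1e-12):
    """Harmonic and geometric means of per-class accuracies.

    Both means are dominated by the worst-performing classes, which is
    why they are natural report metrics for long-tailed learning.
    """
    acc = np.asarray(per_class_acc, dtype=float) + eps  # guard zeros
    harmonic = len(acc) / np.sum(1.0 / acc)
    geometric = float(np.exp(np.mean(np.log(acc))))
    return harmonic, geometric

# A head class at 0.95 and a tail class at 0.40: the arithmetic mean is
# 0.675, but the harmonic (~0.56) and geometric (~0.62) means sit lower.
print(balanced_means([0.95, 0.40]))
```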
arXiv Detail & Related papers (2024-01-30T08:15:13Z)
- Distilling BlackBox to Interpretable models for Efficient Transfer Learning [19.40897632956169]
Building generalizable AI models is one of the primary challenges in the healthcare domain.
Fine-tuning a model to transfer knowledge from one domain to another requires a significant amount of labeled data in the target domain.
We develop an interpretable model that can be efficiently fine-tuned to an unseen target domain with minimal computational cost.
arXiv Detail & Related papers (2023-05-26T23:23:48Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We conduct empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- The Optimal Input-Independent Baseline for Binary Classification: The Dutch Draw [0.0]
The goal of this paper is to examine all baseline methods that are independent of feature values.
By identifying which baseline models are optimal, a crucial selection decision in the evaluation process is simplified.
arXiv Detail & Related papers (2023-01-09T13:11:59Z)
- Class-Incremental Learning with Strong Pre-trained Models [97.84755144148535]
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes).
We explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes.
Our proposed method is robust and generalizes to all analyzed CIL settings.
arXiv Detail & Related papers (2022-04-07T17:58:07Z)
- Deconstructing Distributions: A Pointwise Framework of Learning [15.517383696434162]
We study a point's $\textit{profile}$: the relationship between models' average performance on the test distribution and their pointwise performance on this individual point.
We find that profiles can yield new insights into the structure of both models and data, both in- and out-of-distribution (a toy sketch follows).
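As a rough illustration of the profile idea (all names and the toy data below are hypothetical), one can tabulate, for a pool of models, each model's overall test accuracy against its correctness on a single point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool: 100 models of varying skill evaluated on 500 test points.
# correct[i, j] = 1 if model i classifies point j correctly.
skill = rng.uniform(0.5, 0.95, size=(100, 1))
correct = (rng.random((100, 500)) < skill).astype(int)

def profile_of(correct, j):
    """Pairs (model's average test accuracy, model's correctness on point j)."""
    avg = correct.mean(axis=1)  # x-axis: overall performance
    return np.stack([avg, correct[:, j]], axis=1)

# A point whose correctness rises with overall model quality behaves
# "typically"; flat or inverted profiles flag atypical points.
print(profile_of(correct, j=0)[:5])
```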
arXiv Detail & Related papers (2022-02-20T23:25:28Z)
- Mismatched No More: Joint Model-Policy Optimization for Model-Based RL [172.37829823752364]
We propose a single objective for jointly training the model and the policy, such that updates to either component increase a lower bound on expected return.
Our objective is a global lower bound on expected return, and this bound becomes tight under certain assumptions.
The resulting algorithm (MnM) is conceptually similar to a GAN.
arXiv Detail & Related papers (2021-10-06T13:43:27Z)
- Enhancing the Generalization for Intent Classification and Out-of-Domain Detection in SLU [70.44344060176952]
Intent classification is a major task in spoken language understanding (SLU).
Recent works have shown that using extra data and labels can improve the OOD detection performance.
This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection.
arXiv Detail & Related papers (2021-06-28T08:27:38Z)
- Back to Square One: Bias Detection, Training and Commonsense Disentanglement in the Winograd Schema [106.79804048131253]
The Winograd Schema (WS) has been proposed as a test for measuring the commonsense capabilities of models.
We show that the current evaluation method of WS is sub-optimal and propose a modification that makes use of twin sentences for evaluation.
We conclude that much of the apparent progress on WS may not necessarily reflect progress in commonsense reasoning.
arXiv Detail & Related papers (2021-04-16T15:17:23Z)
- One vs Previous and Similar Classes Learning -- A Comparative Study [2.208242292882514]
This work proposes three learning paradigms which allow trained models to be updated without retraining from scratch.
Results show that the proposed paradigms are faster than the baseline at updating, with two of them being faster at training from scratch as well, especially on larger datasets.
arXiv Detail & Related papers (2021-01-05T00:28:38Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format (sketched below).
We show that our method provides a much more stable training phase across random restarts.
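For intuition, a full-text plausibility score can be approximated by a language model's likelihood of the complete sequence. The sketch below substitutes a causal LM (GPT-2) for the paper's setup and is illustrative only; the premise and candidates are invented:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def full_text_score(premise, candidate):
    """Average token log-likelihood of the premise joined with a candidate."""
    ids = tok(premise + " " + candidate, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)  # out.loss is the mean token NLL
    return -out.loss.item()        # higher means more plausible

premise = "She poured water on the campfire, so"
candidates = ["the fire went out.", "the fire grew larger."]
# Rank candidates by full-text plausibility.
print(max(candidates, key=lambda c: full_text_score(premise, c)))
```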
arXiv Detail & Related papers (2020-04-29T10:54:40Z)