Approximate Cross-validation: Guarantees for Model Assessment and
Selection
- URL: http://arxiv.org/abs/2003.00617v2
- Date: Thu, 11 Jun 2020 02:03:47 GMT
- Title: Approximate Cross-validation: Guarantees for Model Assessment and
Selection
- Authors: Ashia Wilson, Maximilian Kasy, Lester Mackey
- Abstract summary: Cross-validation (CV) is a popular approach for assessing and selecting predictive models.
Recent work in empirical risk minimization approximates the expensive refitting with a single Newton step warm-started from the full training set optimizer.
- Score: 18.77512692975483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-validation (CV) is a popular approach for assessing and selecting
predictive models. However, when the number of folds is large, CV suffers from
a need to repeatedly refit a learning procedure on a large number of training
datasets. Recent work in empirical risk minimization (ERM) approximates the
expensive refitting with a single Newton step warm-started from the full
training set optimizer. While this can greatly reduce runtime, several open
questions remain including whether these approximations lead to faithful model
selection and whether they are suitable for non-smooth objectives. We address
these questions with three main contributions: (i) we provide uniform
non-asymptotic, deterministic model assessment guarantees for approximate CV;
(ii) we show that (roughly) the same conditions also guarantee model selection
performance comparable to CV; (iii) we provide a proximal Newton extension of
the approximate CV framework for non-smooth prediction problems and develop
improved assessment guarantees for problems such as l1-regularized ERM.
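For intuition, here is a minimal sketch (an assumption for illustration, not the authors' code) of the single-Newton-step approximation to leave-one-out CV for l2-regularized logistic regression: each leave-one-out refit is replaced by one Newton step on the leave-one-out objective, warm-started at the full-data optimizer. The function names and toy data are hypothetical, and the paper's proximal Newton extension for non-smooth penalties is not shown.

```python
# Hypothetical sketch of Newton-step approximate leave-one-out CV (ACV);
# not the paper's implementation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_hess(X, y, theta, lam):
    """Gradient and Hessian of sum_i loss_i(theta) + (lam/2)||theta||^2."""
    p = sigmoid(X @ theta)
    grad = X.T @ (p - y) + lam * theta
    W = p * (1.0 - p)
    hess = (X * W[:, None]).T @ X + lam * np.eye(X.shape[1])
    return grad, hess, p, W

def full_fit(X, y, lam, iters=50):
    """Fit the full-data objective with exact Newton iterations."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad, hess, _, _ = grad_hess(X, y, theta, lam)
        theta -= np.linalg.solve(hess, grad)
    return theta

def approx_loo_cv(X, y, lam):
    """One Newton step per fold, warm-started at the full-data optimizer."""
    n = X.shape[0]
    theta_hat = full_fit(X, y, lam)
    grad, hess, p, W = grad_hess(X, y, theta_hat, lam)
    losses = np.empty(n)
    for i in range(n):
        gi = X[i] * (p[i] - y[i])            # point i's gradient contribution
        Hi = W[i] * np.outer(X[i], X[i])     # point i's Hessian contribution
        # Newton step on the leave-one-out objective, started from theta_hat.
        theta_i = theta_hat - np.linalg.solve(hess - Hi, grad - gi)
        # Held-out log loss of the approximate leave-one-out estimate.
        pi = np.clip(sigmoid(X[i] @ theta_i), 1e-12, 1 - 1e-12)
        losses[i] = -(y[i] * np.log(pi) + (1 - y[i]) * np.log(1 - pi))
    return losses.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (sigmoid(X @ rng.normal(size=5)) > rng.uniform(size=200)).astype(float)
    print("ACV estimate of held-out log loss:", approx_loo_cv(X, y, lam=1.0))
```

Because the full-data gradient is approximately zero at the optimizer, the step is driven mainly by removing point i's gradient and Hessian contributions, which is why the approximation costs far less than n full refits.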
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Iterative Approximate Cross-Validation [13.084578404699174]
Cross-validation (CV) is one of the most popular tools for assessing and selecting predictive models.
In this paper, we propose a new paradigm to efficiently approximate CV when the empirical risk minimization (ERM) problem is solved via an iterative first-order algorithm.
Our new method extends existing guarantees for CV approximation to hold along the whole trajectory of the algorithm, including at convergence.
arXiv Detail & Related papers (2023-03-05T17:56:08Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with distributionally robust optimization (DRO) using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Leave Zero Out: Towards a No-Cross-Validation Approach for Model Selection [21.06860861548758]
Cross Validation (CV) is the main workhorse for model selection.
CV suffers from a conservatively biased estimate, since part of the limited data must be held out for validation.
CV also tends to be extremely cumbersome, e.g., intolerably time-consuming, due to the repeated training procedures.
arXiv Detail & Related papers (2020-12-24T16:11:53Z)
- Approximate Cross-validated Mean Estimates for Bayesian Hierarchical Regression Models [6.824747267214373]
We introduce a novel procedure for obtaining cross-validated predictive estimates for Bayesian hierarchical regression models.
We provide theoretical results and demonstrate its efficacy on publicly available data and in simulations.
arXiv Detail & Related papers (2020-11-29T00:00:20Z)
- Approximate Cross-Validation for Structured Models [20.79997929155929]
The gold-standard evaluation technique is structured cross-validation (CV).
But CV here can be prohibitively slow due to the need to re-run already-expensive learning algorithms many times.
Previous work has shown approximate cross-validation (ACV) methods provide a fast and provably accurate alternative.
arXiv Detail & Related papers (2020-06-23T00:06:03Z)
- Estimating the Prediction Performance of Spatial Models via Spatial k-Fold Cross Validation [1.7205106391379026]
In machine learning, one often assumes the data are independent when evaluating model performance. With spatial data this assumption is often violated: spatial autocorrelation (SAC) causes standard cross-validation (CV) methods to produce optimistically biased prediction performance estimates.
We propose a modified version of the CV method called spatial k-fold cross validation (SKCV) which provides a useful estimate for model prediction performance without optimistic bias due to SAC.
arXiv Detail & Related papers (2020-05-28T19:55:18Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)