Prevalidated ridge regression is a highly-efficient drop-in replacement
for logistic regression for high-dimensional data
- URL: http://arxiv.org/abs/2401.15610v1
- Date: Sun, 28 Jan 2024 09:38:14 GMT
- Title: Prevalidated ridge regression is a highly-efficient drop-in replacement
for logistic regression for high-dimensional data
- Authors: Angus Dempster, Geoffrey I. Webb, Daniel F. Schmidt
- Abstract summary: We present a prevalidated ridge regression model that matches logistic regression in terms of classification error and log-loss.
We scale the coefficients of the model so as to minimise log-loss for a set of prevalidated predictions.
This exploits quantities already computed in the course of fitting the ridge regression model in order to find the scaling parameter with nominal additional computational expense.
- Score: 7.532661545437305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logistic regression is a ubiquitous method for probabilistic classification.
However, the effectiveness of logistic regression depends upon careful and
relatively computationally expensive tuning, especially for the regularisation
hyperparameter, and especially in the context of high-dimensional data. We
present a prevalidated ridge regression model that closely matches logistic
regression in terms of classification error and log-loss, particularly for
high-dimensional data, while being significantly more computationally efficient
and having effectively no hyperparameters beyond regularisation. We scale the
coefficients of the model so as to minimise log-loss for a set of prevalidated
predictions derived from the estimated leave-one-out cross-validation error.
This exploits quantities already computed in the course of fitting the ridge
regression model in order to find the scaling parameter with nominal additional
computational expense.
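The procedure lends itself to a compact implementation. Below is a minimal sketch in Python of the general idea, under stated assumptions: the ridge model is fitted once via an SVD, leave-one-out (prevalidated) predictions are recovered from the hat-matrix diagonal that the same SVD already yields, and a single scaling parameter is then chosen to minimise log-loss on those predictions. The function name `prevalidated_ridge`, the lambda grid, and the intercept-free treatment of centred features are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming one-hot regression targets and the standard
# SVD/hat-matrix shortcut for ridge leave-one-out predictions. Names such
# as `prevalidated_ridge` and the lambda grid are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def prevalidated_ridge(X, y, lambdas=np.logspace(-3, 3, 13)):
    """Ridge regression on one-hot targets, with coefficients scaled to
    minimise log-loss on leave-one-out (prevalidated) predictions."""
    n = X.shape[0]
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets, n x k
    Xc = X - X.mean(axis=0)   # centre features (intercept omitted for brevity)

    # One SVD serves every candidate lambda.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    UtY = U.T @ Y

    best = None
    for lam in lambdas:
        d = s**2 / (s**2 + lam)          # per-component shrinkage factors
        Yhat = U @ (d[:, None] * UtY)    # in-sample fitted values
        h = (U**2) @ d                   # diagonal of the hat matrix
        # Leave-one-out predictions from quantities already computed;
        # note that with p >= n and tiny lambda, 1 - h can approach zero.
        Yloo = (Yhat - h[:, None] * Y) / (1.0 - h[:, None])
        err = np.mean((Y - Yloo) ** 2)   # LOOCV squared error for this lambda
        if best is None or err < best[0]:
            best = (err, lam, Yloo)

    _, lam, Yloo = best
    idx = np.searchsorted(classes, y)    # column index of each true class

    def logloss(c):
        # Softmax of the scaled prevalidated predictions, stabilised.
        Z = c * Yloo
        Z = Z - Z.max(axis=1, keepdims=True)
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)
        return -np.mean(np.log(P[np.arange(n), idx] + 1e-12))

    # Single scaling parameter fitted on the prevalidated predictions.
    c = minimize_scalar(logloss, bounds=(1e-3, 1e3), method="bounded").x

    beta = Vt.T @ ((s / (s**2 + lam))[:, None] * UtY)  # ridge coefficients
    return c * beta, lam, c
```

Because the SVD is computed once, each candidate lambda costs only elementwise reweighting, and the hat-matrix diagonal needed for the leave-one-out predictions falls out of the same decomposition; the scaling parameter is then a one-dimensional optimisation over predictions that are already held out, which is the sense in which the additional expense is nominal. In this sketch, probabilities for new data would come from a softmax of the new (training-mean-centred) features multiplied by the scaled coefficients.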
Related papers
- Adaptive Optimization for Prediction with Missing Data [6.800113478497425]
We show that some adaptive linear regression models are equivalent to learning an imputation rule and a downstream linear regression model simultaneously.
In settings where data is strongly not missing at random, our methods achieve a 2-10% improvement in out-of-sample accuracy.
arXiv Detail & Related papers (2024-02-02T16:35:51Z)
- A Novel Approach in Solving Stochastic Generalized Linear Regression via Nonconvex Programming [1.6874375111244329]
This paper considers a generalized linear regression model as a problem with chance constraints.
The proposed algorithm achieved results 1 to 2 percent better than those of an ordinary logistic regression model.
arXiv Detail & Related papers (2024-01-16T16:45:51Z)
- Semi-Supervised Deep Sobolev Regression: Estimation, Variable Selection and Beyond [3.782392436834913]
We propose SDORE, a semi-supervised deep Sobolev regressor, for the nonparametric estimation of the underlying regression function and its gradient.
We conduct a comprehensive analysis of the convergence rates of SDORE and establish a minimax optimal rate for the regression function.
We also derive a convergence rate for the associated plug-in gradient estimator, even in the presence of significant domain shift.
arXiv Detail & Related papers (2024-01-09T13:10:30Z)
- Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose Deep Generative Symbolic Regression, an instantiation of our framework.
arXiv Detail & Related papers (2023-12-30T17:05:31Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce an importance-guided stochastic gradient descent (IGSGD) method to train models to perform inference on inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Statistical Inference for High-Dimensional Linear Regression with Blockwise Missing Data [13.48481978963297]
Blockwise missing data occurs when we integrate multisource or multimodality data where different sources or modalities contain complementary information.
We propose a computationally efficient estimator for the regression coefficient vector based on carefully constructed unbiased estimating equations.
Numerical studies and application analysis of the Alzheimer's Disease Neuroimaging Initiative data show that the proposed method performs better and benefits more from unsupervised samples than existing methods.
arXiv Detail & Related papers (2021-06-07T05:12:42Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression [68.66245730450915]
We develop an improved method for debiasing predictions and estimating frequentist uncertainty for practical datasets.
Our main contribution is SLOE, an estimator of the signal strength with convergence guarantees that reduces the computation time of estimation and inference by orders of magnitude.
arXiv Detail & Related papers (2021-03-23T17:48:56Z)
- A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of the regression problem in which the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z)
- A Locally Adaptive Interpretable Regression [7.4267694612331905]
Linear regression is one of the most interpretable prediction models.
In this work, we introduce a locally adaptive interpretable regression (LoAIR).
Our model achieves comparable or better predictive performance than the other state-of-the-art baselines.
arXiv Detail & Related papers (2020-05-07T09:26:14Z)