Look-Ahead Screening Rules for the Lasso
- URL: http://arxiv.org/abs/2105.05648v1
- Date: Wed, 12 May 2021 13:27:40 GMT
- Title: Look-Ahead Screening Rules for the Lasso
- Authors: Johan Larsson
- Abstract summary: The lasso is a popular method to induce shrinkage and sparsity in the solution vector (coefficients) of regression problems.
We present a new screening strategy: look-ahead screening.
Our method uses safe screening rules to find a range of penalty values for which a given predictor cannot enter the model, thereby screening predictors along the remainder of the path.
- Score: 2.538209532048867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The lasso is a popular method to induce shrinkage and sparsity in the
solution vector (coefficients) of regression problems, particularly when there
are many predictors relative to the number of observations. Solving the lasso
in this high-dimensional setting can, however, be computationally demanding.
Fortunately, this demand can be alleviated via the use of screening rules that
discard predictors prior to fitting the model, leading to a reduced problem to
be solved. In this paper, we present a new screening strategy: look-ahead
screening. Our method uses safe screening rules to find a range of penalty
values for which a given predictor cannot enter the model, thereby screening
predictors along the remainder of the path. In experiments we show that these
look-ahead screening rules improve the performance of existing screening
strategies.
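To make the strategy concrete, here is a minimal sketch of the idea, assuming the Gap Safe sphere test as the underlying safe rule and a brute-force check at each upcoming penalty; the paper derives the screenable penalty range analytically, and all function names below are illustrative.

```python
import numpy as np

def gap_safe_discard(X, y, beta, lam):
    """Gap Safe sphere test for the lasso at penalty lam.

    Returns a boolean mask: True means the predictor is guaranteed
    to have a zero coefficient in the solution at lam.
    """
    residual = y - X @ beta
    # Rescale the residual so theta is dual feasible: |x_j' theta| <= 1.
    theta = residual / max(lam, np.max(np.abs(X.T @ residual)))
    # Duality gap between the lasso primal and dual objectives.
    primal = 0.5 * residual @ residual + lam * np.abs(beta).sum()
    dual = 0.5 * y @ y - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    radius = np.sqrt(2.0 * max(primal - dual, 0.0)) / lam
    # Sphere test: j cannot enter the model if |x_j' theta| + r ||x_j|| < 1.
    return np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0) < 1.0

def look_ahead_screen(X, y, beta, lambdas, k):
    """Reuse the solution beta at lambdas[k] to screen predictors for
    the remaining penalties lambdas[k+1:] on the path."""
    return {i: gap_safe_discard(X, y, beta, lambdas[i])
            for i in range(k + 1, len(lambdas))}
```

Predictors flagged for a given penalty can be dropped from the design matrix before solving at that penalty, shrinking each subsequent subproblem.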
Related papers
- Decision from Suboptimal Classifiers: Excess Risk Pre- and Post-Calibration [52.70324949884702]
We quantify the excess risk incurred using approximate posterior probabilities in batch binary decision-making.
We identify regimes where recalibration alone addresses most of the regret, and regimes where the regret is dominated by the grouping loss.
On NLP experiments, we show that these quantities identify when the expected gain of more advanced post-training is worth the operational cost.
arXiv Detail & Related papers (2025-03-23T10:52:36Z)
- Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering [55.15192437680943]
Generative models lack rigorous statistical guarantees for their outputs.
We propose a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee.
This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example.
arXiv Detail & Related papers (2024-10-02T15:26:52Z)
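For background, a minimal sketch of standard split conformal prediction, the basic construction that sequential methods like the one above build on. Note that the guarantee here is marginal coverage of the true output, which differs from the "at least one admissible example" guarantee described above; the names are illustrative.

```python
import numpy as np

def split_conformal_set(cal_scores, score_fn, candidates, alpha=0.1):
    """Standard split conformal prediction (background, not the paper's method).

    cal_scores: nonconformity scores of a held-out calibration set.
    score_fn:   nonconformity score of a candidate output.
    Returns a set containing the true output with probability >= 1 - alpha.
    """
    n = len(cal_scores)
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(cal_scores, level)
    return [y for y in candidates if score_fn(y) <= q]
```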
- Vector-Valued Least-Squares Regression under Output Regularity Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite dimensional output.
We derive learning bounds for our method and study the settings in which its statistical performance improves on the full-rank method.
arXiv Detail & Related papers (2022-11-16T15:07:00Z)
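As context for the entry above, the classical finite-dimensional form of reduced-rank least squares fits the full-rank solution and projects it onto its leading singular directions. This is only the textbook special case, not the paper's infinite-dimensional method:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank least squares (finite-dimensional sketch).

    Solves min_B ||Y - X B||_F^2 subject to rank(B) <= rank.
    """
    # Full-rank (ordinary least squares) solution.
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # Project the fitted values onto their top right singular directions.
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]  # rank-constrained projection
    return B_ols @ P
```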
- Bagging in overparameterized learning: Risk characterization and risk monotonization [2.6534407766508177]
We study the prediction risk of variants of bagged predictors under the proportional asymptotics regime.
Specifically, we propose a general strategy to analyze the prediction risk under squared error loss of bagged predictors.
arXiv Detail & Related papers (2022-10-20T17:45:58Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
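A linear toy version of the manifold premise above: fit a low-dimensional subspace to normal samples only and score test points by their reconstruction error. This PCA sketch merely illustrates the idea; the paper uses self-supervised autoencoders.

```python
import numpy as np

def fit_normal_subspace(X_normal, dim):
    """Fit a dim-dimensional linear model of the normal data only."""
    mu = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    return mu, Vt[:dim]  # basis of the (linearized) normal manifold

def anomaly_score(X, mu, basis):
    """Distance to the normal subspace: a high score suggests an anomaly."""
    Z = (X - mu) @ basis.T   # encode
    X_hat = Z @ basis + mu   # decode
    return np.linalg.norm(X - X_hat, axis=1)
```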
- A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence [108.28566246421742]
This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension.
In order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
arXiv Detail & Related papers (2022-02-23T16:22:28Z)
- The Hessian Screening Rule [5.076419064097734]
The Hessian Screening Rule uses second-order information from the model to provide more accurate screening.
We show that the rule outperforms all other alternatives in simulated experiments with highly correlated predictors.
arXiv Detail & Related papers (2021-04-27T07:55:29Z)
- Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered weighted $L_1$ (OWL) regularized regression is a regression method for high-dimensional sparse learning.
Proximal gradient methods are the standard approach to solving OWL regression.
We propose the first safe screening rule for OWL regression by exploiting the order structure of the primal solution, even though that order is unknown in advance.
arXiv Detail & Related papers (2020-06-29T23:35:53Z)
- Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring [91.22679787578438]
We present a novel Thompson-sampling-based algorithm for partial monitoring.
We prove that the new algorithm achieves the logarithmic problem-dependent expected pseudo-regret $\mathrm{O}(\log T)$ for a linearized variant of the problem with local observability.
arXiv Detail & Related papers (2020-06-17T05:48:33Z)
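Thompson sampling itself is easiest to see in the Bernoulli-bandit special case sketched below; the partial-monitoring algorithm analyzed above is substantially more involved, so treat this purely as background.

```python
import numpy as np

def thompson_bernoulli(pull_arm, n_arms, horizon, rng=None):
    """Thompson sampling for Bernoulli bandits (background sketch).

    pull_arm(a) returns a 0/1 reward; Beta(1, 1) priors on each arm.
    """
    rng = rng or np.random.default_rng()
    wins = np.ones(n_arms)    # Beta posterior alpha parameters
    losses = np.ones(n_arms)  # Beta posterior beta parameters
    for _ in range(horizon):
        # Sample a plausible mean reward per arm; play the best sample.
        a = int(np.argmax(rng.beta(wins, losses)))
        reward = pull_arm(a)
        wins[a] += reward
        losses[a] += 1 - reward
    return wins, losses
```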
- The Strong Screening Rule for SLOPE [5.156484100374058]
We develop a screening rule for SLOPE by examining its subdifferential and show that this rule is a generalization of the strong rule for the lasso (sketched after this list).
Our numerical experiments show that the rule performs well in practice, leading to improvements by orders of magnitude for data in the $p \gg n$ domain.
arXiv Detail & Related papers (2020-05-07T20:14:20Z)
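For reference, the strong rule for the lasso that the SLOPE rule above generalizes (Tibshirani et al., 2012): with standardized predictors, predictor $j$ is discarded at the new penalty $\lambda_k$ if $|x_j^T(y - X\hat\beta(\lambda_{k-1}))| < 2\lambda_k - \lambda_{k-1}$. A minimal sketch:

```python
import numpy as np

def strong_rule_lasso(X, y, beta_prev, lam_prev, lam):
    """Sequential strong rule for the lasso.

    Given the solution beta_prev at the previous penalty lam_prev,
    discard predictor j at the new penalty lam (lam < lam_prev) when
    |x_j' (y - X beta_prev)| < 2 * lam - lam_prev.
    """
    c = np.abs(X.T @ (y - X @ beta_prev))  # correlation with the residual
    return c < 2 * lam - lam_prev          # True => discard predictor j
```

Unlike the safe rules used for look-ahead screening, strong rules are heuristic: they can occasionally discard an active predictor, so the fitted solution must be checked against the KKT conditions.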
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.