A Survey of Numerical Algorithms that can Solve the Lasso Problems
- URL: http://arxiv.org/abs/2303.03576v1
- Date: Tue, 7 Mar 2023 01:12:59 GMT
- Title: A Survey of Numerical Algorithms that can Solve the Lasso Problems
- Authors: Yujie Zhao, Xiaoming Huo
- Abstract summary: In statistics, the least absolute shrinkage and selection operator (Lasso) is a regression method that performs both variable selection and regularization.
We summarize five representative algorithms to optimize the objective function in Lasso.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In statistics, the least absolute shrinkage and selection operator (Lasso) is
a regression method that performs both variable selection and regularization.
There is a rich literature discussing the statistical properties
of the regression coefficients estimated by the Lasso method. However, a
comprehensive review of the algorithms used to solve the Lasso optimization
problem is lacking. In this review, we summarize five representative
algorithms to optimize the objective function in Lasso, including the iterative
shrinkage threshold algorithm (ISTA), fast iterative shrinkage-thresholding
algorithms (FISTA), coordinate gradient descent algorithm (CGDA), smooth L1
algorithm (SLA), and the path following algorithm (PFA). Additionally, we
compare their convergence rates, as well as their potential strengths and
weaknesses.
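The iterative shrinkage-thresholding algorithm (ISTA) named in the abstract alternates a gradient step on the smooth least-squares term with the soft-thresholding proximal operator of the L1 penalty. A minimal sketch for the lasso objective (1/2)||y - Xb||^2 + lam*||b||_1, assuming a NumPy environment (the function names `soft_threshold` and `ista` are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Minimize (1/2)||y - X b||^2 + lam * ||b||_1 via ISTA."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)           # gradient of the smooth part
        b = soft_threshold(b - grad / L, lam / L)  # prox step
    return b
```

FISTA, also covered in the survey, accelerates this same iteration with a Nesterov-style momentum term, improving the convergence rate from O(1/k) to O(1/k^2).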
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient step while remaining robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Coordinate Descent for SLOPE [6.838073951329198]
Sorted L-One Penalized Estimation (SLOPE) is a generalization of the lasso with appealing statistical properties.
Current software packages that fit SLOPE rely on algorithms that perform poorly in high dimensions.
We propose a new fast algorithm to solve the SLOPE optimization problem, which combines proximal gradient descent and proximal coordinate descent steps.
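The coordinate-descent building block referenced here can be illustrated on the plain lasso (with the ordinary, unsorted L1 penalty rather than SLOPE's sorted one): each coordinate is updated in closed form by soft-thresholding its partial correlation with the current residual. A hypothetical `cd_lasso` sketch in NumPy:

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding operator.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                        # residual y - X b, kept up to date
    col_sq = np.sum(X ** 2, axis=0)     # per-coordinate curvature ||x_j||^2
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]         # remove coordinate j's contribution
            rho = X[:, j] @ r           # partial correlation with residual
            b[j] = soft_threshold(rho, lam) / col_sq[j]  # closed-form update
            r -= X[:, j] * b[j]         # add updated contribution back
    return b
```

Maintaining the residual incrementally is what makes each coordinate update O(n) rather than O(np), which is why coordinate-based methods scale well in high dimensions.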
arXiv Detail & Related papers (2022-10-26T15:20:30Z) - An Algebraically Converging Stochastic Gradient Descent Algorithm for Global Optimization [14.336473214524663]
A key component in the algorithm is the randomness based on the value of the objective function.
We prove algebraic convergence of the algorithm with suitable tuning in the parameter space.
We present several numerical examples to demonstrate the efficiency and robustness of the algorithm.
arXiv Detail & Related papers (2022-04-12T16:27:49Z) - Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z) - Smoothed functional-based gradient algorithms for off-policy reinforcement learning: A non-asymptotic viewpoint [8.087699764574788]
We propose two policy gradient algorithms for solving the problem of control in an off-policy reinforcement learning context.
Both algorithms incorporate a smoothed functional (SF) based gradient estimation scheme.
arXiv Detail & Related papers (2021-01-06T17:06:42Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
Off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z) - Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate substantial gains over the full problem solver, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z) - Efficient Path Algorithms for Clustered Lasso and OSCAR [0.0]
This paper proposes efficient path algorithms for clustered Lasso and OSCAR to construct solution paths with respect to their regularization parameters.
The proposed algorithms are shown to be more efficient than existing algorithms in numerical experiments.
arXiv Detail & Related papers (2020-06-16T07:43:57Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.