An adaptive shortest-solution guided decimation approach to sparse
high-dimensional linear regression
- URL: http://arxiv.org/abs/2211.15057v1
- Date: Mon, 28 Nov 2022 04:29:57 GMT
- Title: An adaptive shortest-solution guided decimation approach to sparse
high-dimensional linear regression
- Authors: Xue Yu, Yifan Sun, Haijun Zhou
- Abstract summary: ASSD is a simple heuristic for constructing sparse high-dimensional linear regression models, adapted from the shortest-solution guided decimation algorithm.
ASSD is especially suitable for linear regression problems with highly correlated measurement matrices encountered in real-world applications.
- Score: 2.3759847811293766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high-dimensional linear regression model is the most popular statistical
model for high-dimensional data, but obtaining a sparse set of regression
coefficients remains a challenging task. In this paper, we propose a simple
heuristic algorithm to construct sparse high-dimensional linear regression
models, which is adapted from the shortest solution-guided decimation algorithm
and is referred to as ASSD. This algorithm constructs the support of regression
coefficients under the guidance of the least-squares solution of the
recursively decimated linear equations, and it applies an early-stopping
criterion and a second-stage thresholding procedure to refine this support. Our
extensive numerical results demonstrate that ASSD outperforms LASSO, vector
approximate message passing, and two other representative greedy algorithms in
solution accuracy and robustness. ASSD is especially suitable for linear
regression problems with highly correlated measurement matrices encountered in
real-world applications.
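The algorithm outlined in the abstract lends itself to a short illustration. The Python sketch below is a minimal rendering of the decimation idea only: the support is grown greedily under the guidance of least-squares solutions, the model is refit on the support, and a simple threshold cleans up the result. The function name decimation_sketch, the tolerance tol, and both stopping checks are placeholder assumptions, not the criteria or reference implementation of the paper.

```python
import numpy as np

def decimation_sketch(X, y, max_support=None, tol=1e-6):
    """Greedy, decimation-style support construction (illustrative only).

    At each step the minimum-norm least-squares solution of the current
    residual system suggests the next variable; the model is refit on the
    enlarged support and the residual is updated.  The early-stopping rule
    and the final thresholding below are simplified placeholders, not the
    criteria used in the paper.
    """
    n, p = X.shape
    if max_support is None:
        max_support = min(n, p)
    support, coef = [], np.zeros(p)
    beta_s = np.zeros(0)
    residual = y.astype(float).copy()
    for _ in range(max_support):
        # Minimum-norm least-squares solution of X z ~ residual.
        z = np.linalg.lstsq(X, residual, rcond=None)[0]
        z[support] = 0.0                     # ignore variables already selected
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) < tol:               # simplified early-stopping check
            break
        support.append(j)
        # Refit on the current support and recompute the residual.
        beta_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta_s
        if np.linalg.norm(residual) < tol:
            break
    coef[support] = beta_s
    coef[np.abs(coef) < tol] = 0.0           # simplified second-stage thresholding
    return coef, support
```

On small synthetic problems with weakly correlated columns, a loop of this kind typically recovers the planted support; the paper's actual early-stopping criterion and second-stage thresholding are more refined than the placeholders used here.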
Related papers
- Gradient-based bilevel optimization for multi-penalty Ridge regression
through matrix differential calculus [0.46040036610482665]
We introduce a gradient-based approach to the problem of linear regression with l2-regularization.
We show that our approach outperforms LASSO, Ridge, and Elastic Net regression.
The analytical computation of the gradient proves to be more efficient in terms of computational time than automatic differentiation; a small single-penalty illustration of this analytic-gradient idea appears at the end of this list.
arXiv Detail & Related papers (2023-11-23T20:03:51Z) - Stochastic Mirror Descent for Large-Scale Sparse Recovery [13.500750042707407]
We discuss an application of Stochastic Approximation to statistical estimation of high-dimensional sparse parameters.
We show that the proposed algorithm attains the optimal convergence rate of the estimation error under weak assumptions on the regressor distribution.
arXiv Detail & Related papers (2022-10-23T23:23:23Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions are placed on the parameters through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - An Accelerated Doubly Stochastic Gradient Method with Faster Explicit
Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z) - Robust Regularized Low-Rank Matrix Models for Regression and
Classification [14.698622796774634]
We propose a framework for matrix variate regression models based on a rank constraint, vector regularization (e.g., sparsity), and a general loss function.
We show that the algorithm is guaranteed to converge, and that all accumulation points of the algorithm have estimation errors of order $O(1/\sqrt{n})$ asymptotically, substantially attaining the minimax rate.
arXiv Detail & Related papers (2022-05-14T18:03:48Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD and that of ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - Piecewise linear regression and classification [0.20305676256390928]
This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/bemporad/parc.
arXiv Detail & Related papers (2021-03-10T17:07:57Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered Weighted $L_1$ (OWL) regularized regression is a new regression analysis for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution with the unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT); a toy sketch-and-solve illustration appears at the end of this list.
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - Nested Model Averaging on Solution Path for High-dimensional Linear
Regression [12.173071615025504]
We study the nested model averaging method on the solution path for a high-dimensional linear regression problem.
We propose to combine model averaging with regularized estimators (e.g., lasso and SLOPE) on the solution path for high-dimensional linear regression.
A real data analysis on predicting per capita violent crime in the United States demonstrates the outstanding performance of nested model averaging with the lasso.
arXiv Detail & Related papers (2020-05-16T18:09:57Z)
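For the gradient-based multi-penalty ridge paper listed first above, a worked single-penalty illustration of the analytic-gradient idea: with $w(\lambda) = (X^\top X + \lambda I)^{-1} X^\top y$, one has $\partial w/\partial\lambda = -(X^\top X + \lambda I)^{-1} w$, so the gradient of a validation loss follows from the chain rule without automatic differentiation. The code below is a sketch under assumed toy data and a single penalty, not the paper's multi-penalty method; all names are illustrative.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution and the system matrix A = X^T X + lam*I."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y), A

def val_loss_and_grad(lam, X_tr, y_tr, X_val, y_val):
    """Validation loss and its analytical derivative with respect to lam."""
    w, A = ridge_fit(X_tr, y_tr, lam)
    r = X_val @ w - y_val                  # validation residual
    dw = -np.linalg.solve(A, w)            # dw/dlam = -(X^T X + lam*I)^{-1} w
    return float(r @ r), float(2.0 * r @ (X_val @ dw))

# Toy check: the analytic derivative matches a central finite difference.
rng = np.random.default_rng(0)
w_true = rng.normal(size=20)
X_tr, X_val = rng.normal(size=(80, 20)), rng.normal(size=(40, 20))
y_tr = X_tr @ w_true + rng.normal(size=80)
y_val = X_val @ w_true + rng.normal(size=40)
lam, eps = 1.0, 1e-5
_, g = val_loss_and_grad(lam, X_tr, y_tr, X_val, y_val)
l_plus, _ = val_loss_and_grad(lam + eps, X_tr, y_tr, X_val, y_val)
l_minus, _ = val_loss_and_grad(lam - eps, X_tr, y_tr, X_val, y_val)
print(g, (l_plus - l_minus) / (2 * eps))   # the two values should agree closely
```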
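Likewise, for the sketching paper above, a toy sketch-and-solve illustration with a Gaussian embedding: the L2-regularized least-squares problem is solved on the compressed pair $(SX, Sy)$ instead of $(X, y)$. The problem sizes, the sketch dimension, and the use of a dense Gaussian matrix rather than the faster SRHT are arbitrary assumptions for this example, not the paper's effective-dimension-adaptive algorithm.

```python
import numpy as np

def ridge_solve(A, b, lam):
    """Closed-form solution of min_w ||A w - b||^2 + lam ||w||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

rng = np.random.default_rng(1)
n, p, m, lam = 5000, 200, 600, 1.0             # sketch size m << n (arbitrary choice)
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Gaussian embedding with i.i.d. N(0, 1/m) entries, so that E[S^T S] = I.
S = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))

w_full = ridge_solve(X, y, lam)                # exact solution on the full data
w_sketch = ridge_solve(S @ X, S @ y, lam)      # approximate solution on the sketch
print(np.linalg.norm(w_sketch - w_full) / np.linalg.norm(w_full))
```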