Piecewise Linearity of Min-Norm Solution Map of a Nonconvexly Regularized Convex Sparse Model
- URL: http://arxiv.org/abs/2311.18438v3
- Date: Tue, 12 Nov 2024 15:57:13 GMT
- Title: Piecewise Linearity of Min-Norm Solution Map of a Nonconvexly Regularized Convex Sparse Model
- Authors: Yi Zhang, Isao Yamada
- Abstract summary: We study the min-norm solution map $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, whose signed sparsity pattern is constant within each linear zone.
A proposed extension of the LARS algorithm iteratively computes the closed-form expression of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$ in each linear zone.
- Score: 8.586951231230596
- Abstract: It is well known that the minimum $\ell_2$-norm solution of the convex LASSO model, say $\mathbf{x}_{\star}$, is a continuous piecewise linear function of the regularization parameter $\lambda$, and its signed sparsity pattern is constant within each linear piece. The current study is an extension of this classic result, proving that the aforementioned properties extend to the min-norm solution map $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, where $\mathbf{y}$ is the observed signal, for a generalization of LASSO termed the scaled generalized minimax concave (sGMC) model. The sGMC model adopts a nonconvex debiased variant of the $\ell_1$-norm as sparse regularizer, but its objective function is overall-convex. Based on the geometric properties of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, we propose an extension of the least angle regression (LARS) algorithm, which iteratively computes the closed-form expression of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$ in each linear zone. Under suitable conditions, the proposed algorithm provably obtains the whole solution map $\mathbf{x}_{\star}(\mathbf{y},\lambda)$ within finite iterations. Notably, our proof techniques for establishing continuity and piecewise linearity of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$ are novel, and they lead to two side contributions: (a) our proofs establish continuity of the sGMC solution set as a set-valued mapping of $(\mathbf{y},\lambda)$; (b) to prove piecewise linearity and piecewise constant sparsity pattern of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, we do not require any assumption that previous work relies on (whereas to prove some additional properties of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, we use a different set of assumptions from previous work).
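The classic piecewise-linear LASSO path that this paper generalizes can be observed numerically. Below is a minimal sketch using scikit-learn's `lars_path` (the classic LASSO/LARS machinery, not the authors' sGMC extension); the synthetic data and variable names are illustrative.

```python
# Minimal sketch: observe the piecewise-linear LASSO path in lambda.
# This uses the classic LASSO/LARS machinery from scikit-learn, NOT the
# sGMC extension proposed in the paper; it only illustrates the classic
# result that the paper generalizes.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p = 50, 20
A = rng.standard_normal((n, p))          # sensing matrix
x_true = np.zeros(p)
x_true[:3] = [2.0, -1.5, 1.0]            # sparse ground truth
y = A @ x_true + 0.05 * rng.standard_normal(n)

# alphas: the lambda values where the path changes slope (the "kinks");
# coefs[:, k] is the solution at alphas[k]. Between consecutive kinks the
# solution is an affine function of lambda and its sign pattern is constant.
alphas, _, coefs = lars_path(A, y, method="lasso")

for k in range(len(alphas) - 1):
    support = np.flatnonzero(coefs[:, k + 1])
    print(f"zone lambda in ({alphas[k + 1]:.4f}, {alphas[k]:.4f}]: "
          f"active set {support.tolist()}")
```

Each printed zone corresponds to one linear piece of the solution path with a fixed active set, the behavior the paper proves for the full map $\mathbf{x}_{\star}(\mathbf{y},\lambda)$ of the sGMC model.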
Related papers
- Efficient Continual Finite-Sum Minimization [52.5238287567572]
We propose a key twist on finite-sum minimization, dubbed continual finite-sum minimization.
Our approach significantly improves upon the $\mathcal{O}(n/\epsilon)$ first-order oracle calls (FOs) that $\mathrm{StochasticGradientDescent}$ requires.
We also prove that there is no natural first-order method with $\mathcal{O}\left(n/\epsilon^{\alpha}\right)$ gradient complexity for $\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.
arXiv Detail & Related papers (2024-06-07T08:26:31Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture, setting it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that, in the worst case, exponential dependence on $m$ (the number of heads) is unavoidable.
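For context, a minimal sketch of the function class in question, a single multi-head attention layer with $m$ heads, is shown below; the shapes, scaling, and parameter names are generic illustrations, not the paper's exact formulation.

```python
# Illustrative forward pass of a single multi-head attention layer, the
# function class whose learnability the paper studies. Shapes and parameter
# names here are generic, not the paper's notation.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d); Wq/Wk/Wv: (m, d, d_head) for m heads."""
    heads = []
    for q, k, v in zip(Wq, Wk, Wv):
        scores = (X @ q) @ (X @ k).T / np.sqrt(q.shape[1])
        heads.append(softmax(scores) @ (X @ v))
    return np.concatenate(heads, axis=-1)   # (seq_len, m * d_head)

rng = np.random.default_rng(0)
m, d, d_head, seq_len = 2, 8, 4, 5
X = rng.standard_normal((seq_len, d))
out = multi_head_attention(X,
                           rng.standard_normal((m, d, d_head)),
                           rng.standard_normal((m, d, d_head)),
                           rng.standard_normal((m, d, d_head)))
print(out.shape)  # (5, 8)
```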
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Optimal Estimator for Linear Regression with Shuffled Labels [17.99906229036223]
This paper considers the task of linear regression with shuffled labels.
The observations, unknown permutation, signals, and additive noise are denoted by $\mathbf{Y} \in \mathbb{R}^{n\times m}$, $\mathbf{\Pi} \in \mathbb{R}^{n\times n}$, $\mathbf{B} \in \mathbb{R}^{p\times m}$, and $\mathbf{W} \in \mathbb{R}^{n\times m}$, respectively, in the model $\mathbf{Y} = \mathbf{\Pi}\mathbf{X}\mathbf{B} + \mathbf{W}$ with sensing matrix $\mathbf{X} \in \mathbb{R}^{n\times p}$.
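A quick way to see the model is to sample from it. Below is a minimal sketch assuming the standard shuffled-labels formulation $\mathbf{Y} = \mathbf{\Pi}\mathbf{X}\mathbf{B} + \mathbf{W}$; all dimensions and noise levels are illustrative.

```python
# Minimal sketch of the shuffled-labels model Y = Pi @ X @ B + W, under the
# standard formulation; dimensions follow the summary above and are
# illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 100, 5, 3
X = rng.standard_normal((n, p))          # sensing matrix
B = rng.standard_normal((p, m))          # unknown signals
Pi = np.eye(n)[rng.permutation(n)]       # unknown permutation matrix
W = 0.1 * rng.standard_normal((n, m))    # additive noise
Y = Pi @ X @ B + W                       # observations with shuffled rows

# The estimation task: recover Pi and B given only (Y, X).
print(Y.shape, Pi.sum(axis=0))           # each column of Pi has exactly one 1
```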
arXiv Detail & Related papers (2023-10-02T16:44:47Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^{*}$ rather than for all $\mathbf{x}^{*}$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Accelerated Methods for Riemannian Min-Max Optimization Ensuring Bounded
Geometric Penalties [21.141544548229774]
We study min-max problems of the form $\min_{x \in \mathcal{M}} \max_{y \in \mathcal{N}} f(x, y)$, where $\mathcal{M}$ and $\mathcal{N}$ are Hadamard manifolds.
We obtain accelerated global convergence rates with improved dependence on the geometric constants.
arXiv Detail & Related papers (2023-05-25T15:43:07Z) - Convergence of a Normal Map-based Prox-SGD Method under the KL
Inequality [0.0]
We present a novel normal map-based algorithm ($\mathsf{norM}\text{-}\mathsf{SGD}$) for nonconvex composite optimization problems and analyze its convergence under the Kurdyka-Łojasiewicz (KL) inequality.
arXiv Detail & Related papers (2023-05-10T01:12:11Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study learning a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations $\sigma$.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C \cdot \mathrm{OPT} + \epsilon$ with high probability, where $C$ is a universal constant.
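A minimal sketch of the learning setup, gradient descent on the empirical squared loss of a single neuron, is given below. For simplicity it uses a ReLU activation and noiseless labels, whereas the paper studies the harder agnostic setting with adversarial label noise.

```python
# Minimal sketch: gradient descent on the empirical version of
# F(w) = E[(sigma(w . x) - y)^2] for a single ReLU neuron. Clean labels are
# used here for simplicity; the paper's setting has adversarial label noise.
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 2000
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.maximum(X @ w_true, 0.0)          # sigma = ReLU, noiseless for demo

w = 0.01 * rng.standard_normal(d)        # small random init (zero is a saddle)
lr = 0.1
for _ in range(500):
    pred = np.maximum(X @ w, 0.0)
    # (sub)gradient of the empirical squared loss
    grad = 2.0 / n * X.T @ ((pred - y) * (X @ w > 0))
    w -= lr * grad

print("final loss:", np.mean((np.maximum(X @ w, 0.0) - y) ** 2))
```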
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Fast Graph Sampling for Short Video Summarization using Gershgorin Disc
Alignment [52.577757919003844]
We study the problem of efficiently summarizing a short video into several keyframes, leveraging recent progress in fast graph sampling.
Experimental results show that our algorithm achieves comparable video summarization as state-of-the-art methods, at a substantially reduced complexity.
arXiv Detail & Related papers (2021-10-21T18:43:00Z) - Minimax Optimal Regression over Sobolev Spaces via Laplacian
Regularization on Neighborhood Graphs [25.597646488273558]
We study the statistical properties of Laplacian smoothing, a graph-based approach to nonparametric regression.
We prove that Laplacian smoothing is manifold-adaptive.
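Laplacian smoothing has a simple closed form: the estimate solves $(\mathbf{I} + \lambda \mathbf{L})\hat{\mathbf{f}} = \mathbf{y}$ for the graph Laplacian $\mathbf{L}$. A minimal sketch on a k-NN graph follows; the graph construction and $\lambda$ are illustrative choices, not the paper's tuned settings.

```python
# Minimal sketch of Laplacian smoothing: f_hat = argmin_f ||f - y||^2
# + lam * f' L f, i.e., f_hat = (I + lam * L)^{-1} y, on a k-NN graph.
# Graph construction and lam are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, k, lam = 200, 5, 1.0
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(4 * np.pi * x) + 0.3 * rng.standard_normal(n)

# symmetric k-NN adjacency on the 1-d design points
D = np.abs(x[:, None] - x[None, :])
A = np.zeros((n, n))
for i in range(n):
    for j in np.argsort(D[i])[1:k + 1]:
        A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian

f_hat = np.linalg.solve(np.eye(n) + lam * L, y)
print("rmse vs truth:", np.sqrt(np.mean((f_hat - np.sin(4 * np.pi * x))**2)))
```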
arXiv Detail & Related papers (2021-06-03T01:20:41Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e. learner has access to only zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm, sbcalg, that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - On the Optimal Weighted $\ell_2$ Regularization in Overparameterized
Linear Regression [23.467801864841526]
We consider the linear model $\mathbf{y} = \mathbf{X} \boldsymbol{\beta}_{\star} + \boldsymbol{\epsilon}$ with $\mathbf{X} \in \mathbb{R}^{n\times p}$ in the overparameterized regime $p > n$.
We provide an exact characterization of the prediction risk $\mathbb{E}(y - \mathbf{x}^{T} \hat{\boldsymbol{\beta}}_{\lambda})^{2}$ in the proportional asymptotic limit $p/n \rightarrow \gamma \in (1, \infty)$.
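The ridge estimator and its prediction risk are easy to reproduce numerically. A minimal Monte-Carlo sketch in the overparameterized regime $p/n = 2$ follows; all settings are illustrative.

```python
# Minimal sketch: ridge estimator beta_hat = (X'X + lam I)^{-1} X'y in the
# overparameterized regime p > n, with a Monte-Carlo estimate of the
# prediction risk E (y_new - x_new' beta_hat)^2. Settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, sigma = 100, 200, 1.0, 0.5     # p/n = gamma = 2
beta_star = rng.standard_normal(p) / np.sqrt(p)
X = rng.standard_normal((n, p))
y = X @ beta_star + sigma * rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

X_new = rng.standard_normal((10000, p))
y_new = X_new @ beta_star + sigma * rng.standard_normal(10000)
risk = np.mean((y_new - X_new @ beta_hat) ** 2)
print(f"estimated prediction risk at lambda={lam}: {risk:.3f}")
```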
arXiv Detail & Related papers (2020-06-10T12:38:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.