A Data-Driven Line Search Rule for Support Recovery in High-dimensional
Data Analysis
- URL: http://arxiv.org/abs/2111.10806v1
- Date: Sun, 21 Nov 2021 12:18:18 GMT
- Title: A Data-Driven Line Search Rule for Support Recovery in High-dimensional
Data Analysis
- Authors: Peili Li, Yuling Jiao, Xiliang Lu, Lican Kang
- Abstract summary: We propose a novel and efficient data-driven line search rule to adaptively determine the appropriate step size.
Extensive comparisons with state-of-the-art algorithms on linear and logistic regression problems show the stability, effectiveness and superiority of the proposed algorithms.
- Score: 5.180648702293017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we consider algorithms for (nonlinear) regression
problems with an $\ell_0$ penalty. Existing algorithms for $\ell_0$-based
optimization problems are often carried out with a fixed step size, and
selecting an appropriate step size depends on the restricted strong convexity
and smoothness of the loss function, which are difficult to compute in
practice. Inspired by the ideas of support detection and root finding
\cite{HJK2020}, we propose a novel and efficient data-driven line search rule
to adaptively determine the appropriate step size. We prove an $\ell_2$ error
bound for the proposed algorithm without imposing strong restrictions on the
cost functional. Extensive numerical comparisons with state-of-the-art
algorithms on linear and logistic regression problems demonstrate the
stability, effectiveness and superiority of the proposed algorithms.
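For intuition, here is a minimal sketch (in Python, assuming a linear least-squares loss) of the general template such methods follow: a gradient step, a hard-thresholding step to enforce sparsity, and a backtracking-style line search standing in for a fixed step size. This is not the authors' algorithm; their data-driven line search rule and support-detection strategy are more refined, and all names below are illustrative.

```python
import numpy as np

def l0_gradient_hard_threshold(X, y, s, max_iter=100, shrink=0.5, tol=1e-8):
    """Illustrative sketch: l0-constrained least squares via gradient steps,
    hard thresholding, and a backtracking line search (NOT the paper's rule)."""
    n, p = X.shape
    beta = np.zeros(p)

    def loss(b):
        r = X @ b - y
        return 0.5 * np.dot(r, r) / n

    for _ in range(max_iter):
        grad = X.T @ (X @ beta - y) / n
        t = 1.0                                    # optimistic initial step size
        while True:
            cand = beta - t * grad
            supp = np.argsort(np.abs(cand))[-s:]   # keep the s largest entries
            nxt = np.zeros(p)
            nxt[supp] = cand[supp]
            # accept once the loss decreases enough (data-driven test), or give up
            if loss(nxt) <= loss(beta) - 1e-4 * t * np.dot(grad, grad) or t < 1e-12:
                break
            t *= shrink                            # otherwise shrink the step and retry
        if np.linalg.norm(nxt - beta) < tol:
            beta = nxt
            break
        beta = nxt
    return beta

# Synthetic example: 5-sparse signal, 200 samples, 500 features
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))
true_beta = np.zeros(500)
true_beta[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]
y = X @ true_beta + 0.01 * rng.standard_normal(200)
print(np.nonzero(l0_gradient_hard_threshold(X, y, s=5))[0])  # recovered support
```

The point of the backtracking loop is that it only needs observed loss values, whereas choosing a fixed step size would require knowing restricted strong convexity and smoothness constants in advance.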
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient step while remaining robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits [55.938644481736446]
Indexed Minimum Empirical Divergence (IMED) is a highly effective approach to the multi-armed bandit problem.
It has been observed to empirically outperform UCB-based algorithms and Thompson Sampling.
We present novel linear versions of the IMED algorithm, which we call the family of LinIMED algorithms.
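For context, below is a minimal sketch of the original (non-linear) IMED index rule for Bernoulli bandits, which is the rule LinIMED adapts to the linear setting: the arm minimizing $N_a \, \mathrm{KL}(\hat\mu_a, \hat\mu_{\max}) + \log N_a$ is played. This illustrates the classic index only, not the LinIMED algorithms.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def imed_bandit(arm_means, horizon=5000, seed=0):
    """Classic IMED: play the arm minimizing N_a*KL(mu_a, mu_max) + log(N_a)."""
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    counts, sums = np.zeros(k, dtype=int), np.zeros(k)
    for t in range(horizon):
        if t < k:
            a = t                                  # pull each arm once first
        else:
            mu_hat = sums / counts
            mu_max = mu_hat.max()
            index = [counts[i] * bernoulli_kl(mu_hat[i], mu_max) + np.log(counts[i])
                     for i in range(k)]
            a = int(np.argmin(index))              # IMED plays the smallest index
        r = rng.binomial(1, arm_means[a])          # Bernoulli reward
        counts[a] += 1
        sums[a] += r
    return counts

print(imed_bandit([0.3, 0.5, 0.7]))  # most pulls should go to the 0.7 arm
```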
arXiv Detail & Related papers (2024-05-24T04:11:58Z)
- Fast Screening Rules for Optimal Design via Quadratic Lasso Reformulation [0.135975510645475]
In this work, we derive safe screening rules that can be used to discard inessential samples.
The new tests are much faster to compute, especially for problems involving a parameter space of high dimension.
We show how an existing homotopy algorithm for computing the regularization path of the lasso can be reparametrized with respect to the squared $\ell_1$-penalty.
arXiv Detail & Related papers (2023-10-13T08:10:46Z)
- An efficient, provably exact, practical algorithm for the 0-1 loss linear classification problem [4.418462313508022]
We show that incremental cell enumeration (ICE) can solve the 0-1 loss classification problem exactly.
This is the first, rigorously-proven, practical algorithm for this long-standing problem.
arXiv Detail & Related papers (2023-06-21T15:41:34Z)
- A New Inexact Proximal Linear Algorithm with Adaptive Stopping Criteria for Robust Phase Retrieval [6.407536646154451]
This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem.
We propose a new inexact proximal linear algorithm in which the subproblem is solved inexactly; our two main contributions are adaptive stopping criteria for the subproblem.
arXiv Detail & Related papers (2023-04-25T02:29:33Z)
- PROMPT: Parallel Iterative Algorithm for $\ell_{p}$ norm linear regression via Majorization Minimization with an application to semi-supervised graph learning [0.0]
We consider the problem of $\ell_p$ norm linear regression, which has several applications such as in sparse recovery, data clustering, and semi-supervised learning.
We propose an iterative algorithm: Parallel IteRative AlgOrithM for $\ell_p$ norm regression via MajorizaTion Minimization (PROMPT).
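As a rough illustration of the majorization-minimization idea behind such methods (the plain sequential MM/IRLS update, not the parallel PROMPT algorithm itself): for $1 \le p \le 2$, $|r|^p$ is majorized by a quadratic at the current residual, so each MM step reduces to a weighted least-squares solve.

```python
import numpy as np

def lp_regression_irls(A, b, p=1.3, iters=50, eps=1e-8):
    """Sequential MM/IRLS sketch for min_x ||Ax - b||_p^p with 1 <= p <= 2
    (illustrative only; PROMPT parallelizes updates of this kind)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # least-squares warm start
    for _ in range(iters):
        r = A @ x - b
        w = np.power(np.abs(r) + eps, p - 2)     # MM weights |r_i|^(p-2)
        Aw = A * w[:, None]                      # rows of A scaled by the weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)  # weighted least-squares step
    return x

# Example: p close to 1 downweights the gross outliers added below
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.1 * rng.standard_normal(100)
b[::10] += 5.0
print(np.linalg.norm(lp_regression_irls(A, b, p=1.2) - x_true))
```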
arXiv Detail & Related papers (2021-10-23T10:19:11Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic [142.1492359556374]
Bilevel optimization is a class of problems which exhibit a two-level structure.
We propose a two-timescale stochastic approximation (TTSA) algorithm for tackling such a bilevel problem.
We show that a two-timescale natural actor-critic policy optimization algorithm can be viewed as a special case of our TTSA framework.
arXiv Detail & Related papers (2020-07-10T05:20:02Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
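For orientation, a minimal sketch-and-solve baseline with a Gaussian embedding is shown below; the paper's contribution is the adaptive, effective-dimension-driven choice of sketch size (and the SRHT analysis), which this sketch does not implement.

```python
import numpy as np

def sketched_ridge(A, b, lam, sketch_size, seed=0):
    """Sketch-and-solve for min_x ||Ax - b||^2 + lam*||x||^2: replace (A, b)
    by (SA, Sb) with a Gaussian embedding S, then solve the small problem."""
    n, d = A.shape
    S = np.random.default_rng(seed).standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    SA, Sb = S @ A, S @ b
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

# Compare against the exact ridge solution on synthetic data
rng = np.random.default_rng(2)
A = rng.standard_normal((5000, 50))
b = A @ rng.standard_normal(50) + rng.standard_normal(5000)
exact = np.linalg.solve(A.T @ A + np.eye(50), A.T @ b)
approx = sketched_ridge(A, b, lam=1.0, sketch_size=500)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))  # small relative error
```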
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
- Efficient Algorithms for Multidimensional Segmented Regression [42.046881924063044]
We study the fundamental problem of fixed-design multidimensional segmented regression.
We provide the first sample and computationally efficient algorithm for this problem in any fixed dimension.
Our algorithm relies on a simple merging iterative approach, which is novel in the multidimensional setting.
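As a hypothetical one-dimensional illustration of a merging-based approach to segmented (piecewise-constant) regression (the paper treats the multidimensional fixed-design case; nothing below is taken from it): start from singleton segments and repeatedly merge the adjacent pair that increases the squared error the least.

```python
import numpy as np

def merge_segments_1d(y, k):
    """Greedy bottom-up merging for 1D piecewise-constant regression (illustrative)."""
    segs = [[float(v)] for v in y]               # each point starts as its own segment

    def sse(seg):
        a = np.asarray(seg)
        return float(np.sum((a - a.mean()) ** 2))

    while len(segs) > k:
        # cost of merging each adjacent pair = increase in total squared error
        costs = [sse(segs[i] + segs[i + 1]) - sse(segs[i]) - sse(segs[i + 1])
                 for i in range(len(segs) - 1)]
        i = int(np.argmin(costs))
        segs[i:i + 2] = [segs[i] + segs[i + 1]]  # merge the cheapest pair
    # fitted value for each point is its segment mean
    return np.concatenate([np.full(len(s), np.mean(s)) for s in segs])

rng = np.random.default_rng(3)
y = np.r_[np.zeros(20), 3 * np.ones(20), np.ones(20)] + 0.1 * rng.standard_normal(60)
print(merge_segments_1d(y, k=3)[::20])  # roughly [0, 3, 1]
```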
arXiv Detail & Related papers (2020-03-24T19:39:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.