Accelerated Sparse Bayesian Learning via Screening Test and Its
Applications
- URL: http://arxiv.org/abs/2007.04006v1
- Date: Wed, 8 Jul 2020 10:21:56 GMT
- Title: Accelerated Sparse Bayesian Learning via Screening Test and Its
Applications
- Authors: Yiping Jiang, Tianshi Chen
- Abstract summary: For a linear system, directly finding the sparsest solution over an over-complete dictionary of features is typically NP-hard.
We propose sparse Bayesian learning, which uses a parameterized prior to encourage sparsity in the solution.
- Score: 0.9916217495995309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In high-dimensional settings, sparse structures are critical for efficiency in terms of memory and computational complexity. For a linear system, directly finding the sparsest solution over an over-complete dictionary of features is typically NP-hard, so alternative approximate methods must be considered. In this paper, our choice of alternative method is sparse Bayesian learning, which, as an empirical Bayesian approach, uses a parameterized prior to encourage sparsity in the solution, in contrast to methods with fixed priors such as LASSO. To accelerate it, we employ a screening test, which quickly identifies a subset of features whose coefficients are guaranteed to be zero in the optimal solution and can therefore be safely removed from the complete dictionary, yielding a smaller, more easily solved problem. We then solve the smaller problem, and recover the solution of the original problem by padding the smaller solution with zeros. The performance of the proposed method is examined on various data sets and applications.
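The abstract's pipeline (screen, solve the reduced problem, zero-pad) is easy to state in code. Below is a minimal Python sketch of that generic flow; `screening_test` and `solve_reduced` are hypothetical hooks standing in for the paper's actual test and its sparse Bayesian learning solver, and the toy test in the demo is illustrative only, not a safe rule.

```python
import numpy as np

def screened_solve(Phi, y, screening_test, solve_reduced):
    """Sketch of the screen-then-solve pipeline described in the abstract.

    Phi            -- (n, m) over-complete dictionary of features
    y              -- (n,) observation vector
    screening_test -- hypothetical hook returning indices of columns that
                      survive the test; discarded columns are the ones a
                      safe rule guarantees to be zero at the optimum
    solve_reduced  -- hypothetical hook solving the reduced problem,
                      e.g. a sparse Bayesian learning routine
    """
    keep = screening_test(Phi, y)              # 1. screen the dictionary
    x_small = solve_reduced(Phi[:, keep], y)   # 2. solve the smaller problem
    x = np.zeros(Phi.shape[1])                 # 3. recover the original-size
    x[keep] = x_small                          #    solution by zero-padding
    return x

# Toy illustration (NOT the paper's safe test): keep the 10 columns most
# correlated with y and fit least squares on the survivors.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 200))
y = rng.standard_normal(50)
top10 = lambda P, t: np.argsort(-np.abs(P.T @ t))[:10]
lstsq = lambda P, t: np.linalg.lstsq(P, t, rcond=None)[0]
x_full = screened_solve(Phi, y, top10, lstsq)
assert np.count_nonzero(x_full) <= 10
```

Because a safe test discards only columns whose coefficients are guaranteed to be zero at the optimum, the zero-padded result is exactly the solution of the original problem, not an approximation.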
Related papers
- Discretize Relaxed Solution of Spectral Clustering via a Non-Heuristic Algorithm [77.53604156112144]
We develop a first-order term to bridge the original problem and the discretization algorithm.
Since the non-heuristic method is aware of the original graph-cut problem, the final discrete solution is more reliable.
arXiv Detail & Related papers (2023-10-19T13:57:38Z)
- Fast Screening Rules for Optimal Design via Quadratic Lasso Reformulation [0.135975510645475]
In this work, we derive safe screening rules that can be used to discard inessential samples.
The new tests are much faster to compute, especially for problems involving a parameter space of high dimension.
We show how an existing homotopy algorithm to compute the regularization path of the lasso method can be reparametrized with respect to the squared $\ell_1$-penalty.
arXiv Detail & Related papers (2023-10-13T08:10:46Z)
- OKRidge: Scalable Optimal k-Sparse Ridge Regression [21.17964202317435]
We propose a fast algorithm, OKRidge, for sparse ridge regression.
We also propose a method to warm-start our solver, which leverages a beam search.
arXiv Detail & Related papers (2023-04-13T17:34:44Z)
- Global and Preference-based Optimization with Mixed Variables using Piecewise Affine Surrogates [0.6083861980670925]
This paper proposes a novel surrogate-based global optimization algorithm to solve linearly constrained mixed-variable problems.
We assume the objective function is black-box and expensive to evaluate, while the linear constraints are quantifiable, unrelaxable, and a priori known.
We introduce two types of exploration functions to efficiently search the feasible domain via mixed-integer linear programming solvers.
arXiv Detail & Related papers (2023-02-09T15:04:35Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing violation in policy optimization tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Partitioned Least Squares [3.372747046563984]
We show that the new formulation is not convex and provide two alternative methods to deal with the problem.
For the sake of completeness, we provide an alternative branch and bound algorithm that can be used in place of the exact method when the number of partitions is too large.
arXiv Detail & Related papers (2020-06-29T17:10:32Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
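On the last entry above: the implicit-differentiation idea can be sketched for the plain Lasso. The snippet below is a minimal illustration, not the paper's algorithm; it assumes a scikit-learn inner solver and a caller-supplied outer gradient (`outer_grad`, a hypothetical input), and it uses a dense solve on the active set, whereas the paper targets general Lasso-type models and avoids such dense algebra at scale.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_hypergradient(X, y, lam, outer_grad):
    """Gradient of an outer criterion w.r.t. lam via implicit differentiation.

    Differentiating the active-set optimality condition of the Lasso,
        X_E^T (X_E beta_E - y) + n * lam * sign(beta_E) = 0,
    with respect to lam yields a linear system restricted to the support E,
    so only a small solve is needed rather than a full matrix inversion.
    `outer_grad` is the gradient of the outer (e.g. validation) loss at the
    Lasso solution -- an assumed input, not part of the paper's interface.
    """
    n = X.shape[0]
    beta = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    E = np.flatnonzero(beta)                       # active set of the solution
    X_E = X[:, E]
    # d(beta_E)/d(lam) solves (X_E^T X_E) z = -n * sign(beta_E)
    dbeta_E = np.linalg.solve(X_E.T @ X_E, -n * np.sign(beta[E]))
    return outer_grad[E] @ dbeta_E                 # chain rule on the support
```

The point the summary's "leveraging the sparsity of the solutions" refers to is that the linear system has size |E| (the support of the solution), not the full feature dimension.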
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.