An Exact Solution Path Algorithm for SLOPE and Quasi-Spherical OSCAR
- URL: http://arxiv.org/abs/2010.15511v1
- Date: Thu, 29 Oct 2020 12:03:22 GMT
- Title: An Exact Solution Path Algorithm for SLOPE and Quasi-Spherical OSCAR
- Authors: Shunichi Nomura
- Abstract summary: This study presents a solution path algorithm that provides the complete and exact path of solutions for SLOPE when fine-tuning its regularization weights.
It also proposes a new design of the regularization sequence $\lambda$ for feature clustering, called the quasi-spherical and octagonal shrinkage and clustering algorithm for regression (QS-OSCAR).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The sorted $L_1$ penalization estimator (SLOPE) is a regularization technique for
sorted absolute coefficients in high-dimensional regression. By arbitrarily
setting its regularization weights $\lambda$ under the monotonicity constraint,
SLOPE can attain various feature selection and clustering properties. In weight
tuning, however, the selected features and their clusters are highly sensitive to
the tuning parameters, and exhaustively tracking their changes is difficult with
grid-search methods. This study presents a solution path algorithm that provides
the complete and exact path of solutions for SLOPE when fine-tuning the
regularization weights. A simple optimality condition for SLOPE is derived and
used to specify the next splitting point of the solution path. This study also
proposes a new design of the regularization sequence $\lambda$ for feature
clustering, called the quasi-spherical and octagonal shrinkage and clustering
algorithm for regression (QS-OSCAR). QS-OSCAR is designed so that the contour
surface of its regularization term is as close as possible to a sphere.
Simulation studies compare the sparsity and clustering performance of several
regularization sequence designs; the numerical results show that QS-OSCAR
performs feature clustering more efficiently than the other designs.
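For concreteness, SLOPE penalizes the sorted absolute coefficients, $J_\lambda(\beta) = \sum_{i=1}^p \lambda_i |\beta|_{(i)}$ with nonincreasing weights $\lambda_1 \ge \dots \ge \lambda_p \ge 0$, and OSCAR corresponds to the linearly decaying sequence $\lambda_i = \lambda_1 + \lambda_2 (p - i)$. The sketch below (Python/NumPy; the function names and weight values are illustrative, not the paper's code) evaluates this penalty and computes its proximal operator via a pool-adjacent-violators pass, the standard building block of SLOPE solvers. It does not implement the paper's exact solution path algorithm or the QS-OSCAR sequence, which the paper defines.

```python
import numpy as np

def slope_penalty(beta, lam):
    """Sorted-L1 (SLOPE) penalty: sum_i lam[i] * |beta|_(i),
    with lam nonincreasing and |beta| sorted in decreasing order."""
    return float(np.dot(lam, np.sort(np.abs(beta))[::-1]))

def prox_slope(y, lam):
    """Prox of the SLOPE penalty: argmin_b 0.5*||b - y||^2 + slope_penalty(b, lam).
    Sort |y| decreasingly, subtract lam, project onto the nonincreasing cone by
    pool-adjacent-violators, clip at zero, then undo the sort and restore signs."""
    order = np.argsort(np.abs(y))[::-1]
    z = np.abs(y)[order] - lam
    # Pool-adjacent-violators: merge adjacent blocks whose means violate
    # the nonincreasing order, replacing them by their weighted mean.
    blocks = []  # each entry: [start index, block mean, block length]
    for i, zi in enumerate(z):
        blocks.append([i, zi, 1])
        while len(blocks) > 1 and blocks[-2][1] <= blocks[-1][1]:
            s2, m2, l2 = blocks.pop()
            s1, m1, l1 = blocks.pop()
            blocks.append([s1, (m1 * l1 + m2 * l2) / (l1 + l2), l1 + l2])
    x = np.zeros_like(z)
    for s, m, l in blocks:
        x[s:s + l] = max(m, 0.0)  # positive part yields exact sparsity
    out = np.zeros_like(x)
    out[order] = x
    return np.sign(y) * out

# OSCAR as SLOPE with a linearly decaying weight sequence
# lam_i = lam1 + lam2 * (p - i); the equal decrements induce ties
# (clusters) among the sorted coefficient magnitudes.
p = 5
lam1, lam2 = 0.2, 0.1
lam = lam1 + lam2 * np.arange(p - 1, -1, -1.0)

y = np.array([3.0, -2.9, 1.0, 0.1, -0.05])
print(prox_slope(y, lam))  # -> [ 2.4 -2.4  0.6  0.   0. ]
```

In this toy example the two largest coefficients are tied to a common magnitude (2.4) while the smallest are set to zero, which is exactly the clustering and sparsity behavior the paper's path algorithm tracks exactly as $\lambda$ varies, rather than recomputing solutions on a grid.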
Related papers
- Shuffled Linear Regression via Spectral Matching [6.24954299842136]
Shuffled linear regression seeks to estimate latent features through a linear transformation.
This problem extends traditional least-squares (LS) and Least Absolute Shrinkage and Selection Operator (LASSO) approaches.
We propose a spectral matching method that efficiently resolves permutations.
arXiv Detail & Related papers (2024-09-30T16:26:40Z) - Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Personalized Federated Learning via Convex Clustering [72.15857783681658]
We propose a family of algorithms for personalized federated learning with locally convex user costs.
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized.
arXiv Detail & Related papers (2022-02-01T19:25:31Z) - Adaptive Sequential SAA for Solving Two-stage Stochastic Linear Programs [1.6181085766811525]
We present adaptive sequential SAA (sample average approximation) algorithms to solve large-scale two-stage linear programs.
The proposed algorithm can be stopped in finite time to return a solution endowed with a probabilistic guarantee on quality.
arXiv Detail & Related papers (2020-12-07T14:58:16Z) - Bayesian Sparse learning with preconditioned stochastic gradient MCMC
and its applications [5.660384137948734]
We show that the proposed algorithm asymptotically converges to the correct distribution with a controllable bias under mild conditions.
arXiv Detail & Related papers (2020-06-29T20:57:20Z) - Certifying clusters from sum-of-norms clustering [13.747619681451875]
We present a clustering test that identifies and certifies the correct cluster assignment from an approximate solution.
We show the correct cluster assignment is guaranteed to be certified by a primal-dual path following algorithm after sufficiently many iterations.
arXiv Detail & Related papers (2020-06-19T20:26:26Z) - Efficient Path Algorithms for Clustered Lasso and OSCAR [0.0]
This paper proposes efficient path algorithms for clustered Lasso and OSCAR to construct solution paths with respect to their regularization parameters.
The proposed algorithms are shown to be more efficient than existing algorithms in numerical experiments.
arXiv Detail & Related papers (2020-06-16T07:43:57Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove an $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Local Graph Clustering with Network Lasso [90.66817876491052]
We study the statistical and computational properties of a network Lasso method for local graph clustering.
The clusters delivered by nLasso can be characterized elegantly via network flows between cluster boundary and seed nodes.
arXiv Detail & Related papers (2020-04-25T17:52:05Z) - Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume maximization on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.