Screening Rules and its Complexity for Active Set Identification
- URL: http://arxiv.org/abs/2009.02709v1
- Date: Sun, 6 Sep 2020 11:10:34 GMT
- Title: Screening Rules and its Complexity for Active Set Identification
- Authors: Eugene Ndiaye and Olivier Fercoq and Joseph Salmon
- Abstract summary: We show that screening rules stem from a combination of natural properties of subdifferential sets and optimality conditions.
Under mild assumptions, we analyze the number of iterations needed to identify the optimal active set for any converging algorithm.
- Score: 16.762870396299334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Screening rules were recently introduced as a technique for explicitly
identifying active structures such as sparsity, in optimization problems arising
in machine learning. This has led to new methods of acceleration based on a
substantial dimension reduction. We show that screening rules stem from a
combination of natural properties of subdifferential sets and optimality
conditions, and can hence be understood in a unified way. Under mild
assumptions, we analyze the number of iterations needed to identify the optimal
active set for any converging algorithm. We show that this number depends only
on the algorithm's convergence rate.
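As a concrete illustration of the kind of rule the paper unifies, here is a minimal sketch of the classical gap-safe screening test for the Lasso (a sketch, not code from the paper; the function name, toy data, and regularization level are invented for the example):

```python
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    """Gap-safe screening test for the Lasso
        min_b 0.5 * ||y - X b||_2^2 + lam * ||b||_1.
    Given ANY primal iterate `beta`, returns a boolean mask of features that
    are certified inactive (b_j = 0 at every optimum) and can be removed.
    """
    r = y - X @ beta                                 # residual of the iterate
    # Rescale the residual so theta is dual feasible: ||X^T theta||_inf <= 1.
    theta = r / max(lam, np.abs(X.T @ r).max())
    # Duality gap between the primal iterate and the dual feasible point.
    primal = 0.5 * (r @ r) + lam * np.abs(beta).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    # All dual optima lie in a ball of radius sqrt(2*gap)/lam around theta;
    # a feature whose correlation stays below 1 on that ball is inactive.
    radius = np.sqrt(2.0 * gap) / lam
    return np.abs(X.T @ theta) + np.linalg.norm(X, axis=0) * radius < 1.0

# Toy usage: even at beta = 0 the rule discards clearly inactive features.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
y = X[:, :5] @ rng.standard_normal(5) + 0.01 * rng.standard_normal(50)
lam = 0.5 * np.abs(X.T @ y).max()
print(gap_safe_screen(X, y, np.zeros(200), lam).sum(), "features screened out")
```

As an iterate converges, the duality gap shrinks, the safe ball tightens around the dual optimum, and the test eventually certifies every feature outside the optimal active set; the paper's complexity analysis bounds how many iterations this takes in terms of the algorithm's convergence rate.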
Related papers
- An Accelerated Block Proximal Framework with Adaptive Momentum for Nonconvex and Nonsmooth Optimization [2.323238724742687]
We propose an accelerated block proximal linear framework with adaptive momentum (ABPL$+$) for nonconvex and nonsmooth optimization.
We analyze the potential causes of the extrapolation step failing in some algorithms, and resolve this issue by enhancing the comparison process.
We extend our algorithm to any scenario involving updating the gradient step and the linear extrapolation step.
arXiv Detail & Related papers (2023-08-23T13:32:31Z)
- The Impact of the Geometric Properties of the Constraint Set in Safe Optimization with Bandit Feedback [5.758073912084366]
We consider a safe optimization problem with bandit feedback in which an agent sequentially chooses actions and observes responses from the environment.
We propose an algorithm for this problem, and study how the geometric properties of the constraint set impact the regret of the algorithm.
arXiv Detail & Related papers (2023-05-01T15:48:34Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts (an illustrative step of this form is sketched below).
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
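For intuition, a generic step of this kind, written for the illustrative special case of minimizing $f + g$ over a compact convex set $\mathcal{X}$ with $f$ smooth and $g$ convex but non-differentiable (the paper's fully composite model is more general):

```latex
% Generalized conditional-gradient step: linearize only the smooth part f,
% keep the non-differentiable part g exact inside the subproblem.
\[
  x_{k+1} \in \operatorname*{arg\,min}_{x \in \mathcal{X}}\;
  \langle \nabla f(x_k),\, x \rangle + g(x)
\]
```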
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit the connection between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training (the fixed-point identity underlying this idea is sketched below).
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
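For reference, the standard identity that implicit differentiation through a fixed point relies on, stated generically (the paper's exact estimator may differ in its details): for a refinement map $f_\theta$ with fixed point $z^\star$,

```latex
% Differentiating the fixed-point condition z* = f_theta(z*, x) implicitly
% avoids backpropagating through the refinement iterations themselves.
\[
  z^\star = f_\theta(z^\star, x)
  \quad\Longrightarrow\quad
  \frac{\partial z^\star}{\partial \theta}
  = \left( I - \frac{\partial f_\theta}{\partial z}\bigg|_{z^\star} \right)^{-1}
    \frac{\partial f_\theta}{\partial \theta}\bigg|_{z^\star}
\]
```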
- Exclusive Group Lasso for Structured Variable Selection [10.86544864007391]
A structured variable selection problem is considered.
A composite norm can be properly designed to promote such exclusive group sparsity patterns (one standard penalty of this type is shown below).
An active set algorithm is proposed that builds the solution by including structured atoms into the estimated support.
arXiv Detail & Related papers (2021-08-23T16:55:13Z)
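For concreteness, one composite norm of this kind from the exclusive-lasso literature (an illustrative choice; the paper's construction may be more general). With the coefficients $\beta$ partitioned into groups $g \in \mathcal{G}$:

```latex
% Exclusive group lasso penalty: an l1 norm inside each group, squared and
% summed over groups, promotes sparsity within every group.
\[
  \Omega(\beta) = \sum_{g \in \mathcal{G}} \|\beta_g\|_1^2
\]
```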
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
One distinctive feature of our approach is that projections or optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- Screening for Sparse Online Learning [11.523471275501855]
Sparsity-promoting regularizers are widely used to impose low-complexity structure (e.g. the l1-norm for sparsity) on the regression coefficients in supervised learning.
Most online algorithms do not have this identification property, owing to their vanishing step-size and non-vanishing variance.
We show how to eliminate useless features of the iterates generated by online algorithms, and thereby enforce finite activity identification (a sketch of such a screening loop follows this entry).
arXiv Detail & Related papers (2021-01-18T10:40:47Z)
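A minimal sketch of how such screening can be layered on top of an online algorithm; `screen_fn`, the schedule, and all names here are assumptions for illustration (e.g. `screen_fn` could run a gap-safe test on a buffered mini-batch), not the paper's procedure:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def online_l1_with_screening(stream, dim, lam, lr=0.01,
                             screen_fn=None, screen_every=100):
    """Online proximal gradient for l1-regularized least squares.
    Coordinates certified inactive by `screen_fn` are frozen at zero,
    shrinking the per-step cost as the active set is identified."""
    beta = np.zeros(dim)
    active = np.ones(dim, dtype=bool)
    for t, (x, y) in enumerate(stream, start=1):
        # Stochastic proximal gradient step on the active coordinates only
        # (inactive coordinates are exactly zero, so the residual is exact).
        grad = (x[active] @ beta[active] - y) * x[active]
        beta[active] = soft_threshold(beta[active] - lr * grad, lr * lam)
        if screen_fn is not None and t % screen_every == 0:
            active &= ~screen_fn(beta)      # drop newly certified features
            beta[~active] = 0.0
    return beta
```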
- Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives [97.16266088683061]
The article rigorously establishes why symplectic discretization schemes are important for momentum-based optimization algorithms.
It provides a characterization of algorithms that exhibit accelerated convergence.
arXiv Detail & Related papers (2020-02-28T00:32:47Z)
- Optimization of Graph Total Variation via Active-Set-based Combinatorial Reconditioning [48.42916680063503]
We propose a novel adaptive preconditioning strategy for proximal algorithms on this problem class.
We show that nested-forest decomposition of the inactive edges yields a guaranteed local linear convergence rate.
Our results suggest that local convergence analysis can serve as a guideline for selecting variable metrics in proximal algorithms.
arXiv Detail & Related papers (2020-02-27T16:33:09Z)