l1-Norm Minimization with Regula Falsi Type Root Finding Methods
- URL: http://arxiv.org/abs/2105.00244v1
- Date: Sat, 1 May 2021 13:24:38 GMT
- Title: l1-Norm Minimization with Regula Falsi Type Root Finding Methods
- Authors: Metin Vural, Aleksandr Y. Aravkin, and Sławomir Stańczak
- Abstract summary: We develop an efficient approach for nonconvex likelihoods using Regula Falsi root-finding techniques.
Practical performance is illustrated using l1-regularized Student's t inversion.
- Score: 81.55197998207593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse level-set formulations allow practitioners to find the minimum 1-norm
solution subject to likelihood constraints. Prior art requires this constraint
to be convex. In this letter, we develop an efficient approach for nonconvex
likelihoods, using Regula Falsi root-finding techniques to solve the level-set
formulation. Regula Falsi methods are simple, derivative-free, and efficient,
and the approach provably extends level-set methods to the broader class of
nonconvex inverse problems. Practical performance is illustrated using
l1-regularized Student's t inversion, which is a nonconvex approach used to
develop outlier-robust formulations.
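To make the level-set mechanics concrete, the following is a minimal Python sketch under stated assumptions, not the authors' implementation: it uses a Student's t misfit, a projected-gradient inner solver for the l1-ball subproblem, and the classical false-position (Regula Falsi) update on the value function. The helper names, step sizes, and the misfit target sigma are illustrative choices that do not come from the paper.

```python
# Sketch: find tau with v(tau) = phi(tau) - sigma = 0, where
#   phi(tau) = min_{||x||_1 <= tau} rho(A x - b)
# and rho is a (nonconvex) Student's t misfit. Inner problem: projected
# gradient descent onto the l1 ball. Outer problem: classical Regula Falsi.
# All names and parameter choices below are illustrative assumptions.

import numpy as np

def student_t_misfit(r, nu=4.0):
    """Student's t negative log-likelihood misfit (up to constants)."""
    return 0.5 * (nu + 1.0) * np.sum(np.log1p(r**2 / nu))

def student_t_misfit_grad(r, nu=4.0):
    return (nu + 1.0) * r / (nu + r**2)

def project_l1_ball(x, tau):
    """Euclidean projection of x onto the l1 ball of radius tau."""
    if tau <= 0:
        return np.zeros_like(x)
    if np.sum(np.abs(x)) <= tau:
        return x
    u = np.sort(np.abs(x))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, x.size + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def phi(tau, A, b, n_iter=500):
    """Approximate value function phi(tau) via projected gradient descent."""
    step = 0.8 / np.linalg.norm(A, 2) ** 2   # conservative step for this misfit
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ x - b
        x = project_l1_ball(x - step * (A.T @ student_t_misfit_grad(r)), tau)
    return student_t_misfit(A @ x - b)

def regula_falsi(v, tau_lo, tau_hi, tol=1e-6, max_iter=50):
    """Derivative-free false-position iteration for a root of v on [tau_lo, tau_hi]."""
    v_lo, v_hi = v(tau_lo), v(tau_hi)
    assert v_lo * v_hi < 0, "the root must be bracketed"
    tau = tau_hi
    for _ in range(max_iter):
        # secant through the two bracketing endpoints
        tau = tau_hi - v_hi * (tau_hi - tau_lo) / (v_hi - v_lo)
        v_tau = v(tau)
        if abs(v_tau) < tol:
            break
        if v_lo * v_tau < 0:
            tau_hi, v_hi = tau, v_tau
        else:
            tau_lo, v_lo = tau, v_tau
    return tau

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.05 * rng.standard_normal(80)
    sigma = 0.1 * student_t_misfit(-b)       # illustrative misfit target
    v = lambda tau: phi(tau, A, b) - sigma
    print("recovered tau:", regula_falsi(v, 0.0, 10.0))
```

The outer iteration needs only evaluations of v(tau) and maintains a sign-changing bracket, which is what makes derivative-free Regula Falsi type updates attractive even when the inner likelihood is nonconvex.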
Related papers
- Gradient-Free Method for Heavily Constrained Nonconvex Optimization [43.941106371683766]
Zeroth-order (ZO) methods have been shown to be powerful for solving optimization problems where an explicit expression of the constraints is difficult or infeasible to obtain.
In this paper, we propose a doubly stochastic zeroth-order gradient method with momentum and an adaptive step size.
arXiv Detail & Related papers (2024-08-31T13:46:54Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as a loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z) - Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints [9.301728976515255]
This article provides new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold.
We also consider variance reduction algorithms when the cost function is an average of many functions.
arXiv Detail & Related papers (2023-03-29T07:36:54Z) - A conditional gradient homotopy method with applications to Semidefinite
Programming [1.6369790794838281]
We propose a homotopy-based conditional gradient method for solving convex optimization problems with a large number of simple conic constraints.
Our theoretical complexity is competitive with state-of-the-art SDP solvers, with the decisive advantage of cheap projection-free subroutines.
arXiv Detail & Related papers (2022-07-07T05:48:27Z) - On Constraints in First-Order Optimization: A View from Non-Smooth
Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
Two distinctive features of our approach are that projections or optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z) - Regret minimization in stochastic non-convex learning via a
proximal-gradient approach [80.59047515124198]
Motivated by applications in machine learning and operations research, we study regret minimization with first-order oracle feedback in online constrained problems.
We develop a new proximal-gradient scheme with guarantees based on complexity reduction techniques.
arXiv Detail & Related papers (2020-10-13T09:22:21Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Handling the Positive-Definite Constraint in the Bayesian Learning Rule [33.87717973872535]
The Bayesian learning rule is a natural-gradient variational inference method.
When variational parameters lie in an open constraint set, the rule may not satisfy the constraint and requires line-searches which could slow down the algorithm.
Our work makes it easier to apply the rule in the presence of positive-definite constraints in parameter spaces.
arXiv Detail & Related papers (2020-02-24T03:29:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.